Find two factors for right shift - c#

Community,
Assume we have a random integer in the range Int32.MinValue to Int32.MaxValue.
I'd like to find two numbers which produce this integer when combined with the right shift operator.
An example:
If the input value is 123456, two possible output values could be 2022703104 and 14, because 2022703104 >> 14 == 123456.
Here is my attempt:
private static int[] DetermineShr(int input)
{
    int[] arr = new int[2];
    if (input == 0)
    {
        arr[0] = 0;
        arr[1] = 0;
        return arr;
    }
    int a = (int)Math.Log(int.MaxValue / Math.Abs(input), 2);
    int b = (int)(input * Math.Pow(2, a));
    arr[0] = a;
    arr[1] = b;
    return arr;
}
However, for some negative values it doesn't work: the output doesn't result in a correct calculation.
And for very small input values such as -2147483648 it throws an exception.
How can I modify my function so it produces a valid output for all input values between Int32.MinValue and Int32.MaxValue?

Well, let's compare:
123456     == 11110001001000000
2022703104 == 1111000100100000000000000000000
Can you see the pattern? If you're given the shift (14 in your case), the answer is
(123456 << shift) + any number in the [0 .. 2**shift - 1] range
However, on large values the left shift can overflow an int; since the shift is small (less than 32), I suggest using long:
private static long Factor(int source, int shift) {
    unchecked {
        // (uint): we want the bits, not the two's complement value
        long value = (uint) source;
        return value << shift;
    }
}
Test:
int a = -1;
long b = Factor(-1, 3);
Console.WriteLine(a);
Console.WriteLine(Convert.ToString(a, 2));
Console.WriteLine(b);
Console.WriteLine(Convert.ToString(b, 2));
will return
-1
11111111111111111111111111111111
34359738360
11111111111111111111111111111111000
Please notice that negative integers, being stored in two's complement
(https://en.wikipedia.org/wiki/Two%27s_complement),
are in fact quite large when treated as unsigned integers.
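For completeness, here is a minimal sketch of a DetermineShr covering the whole Int32 range, including Int32.MinValue, returned in the same { shift, value } layout as the question's code: keep doubling in a long until the next doubling would leave the Int32 range, so the arithmetic right shift by that amount always round-trips back to the input.
private static int[] DetermineShr(int input)
{
    int shift = 0;
    long value = input;
    // double as long as the result still fits into an Int32
    while (shift < 31)
    {
        long candidate = value << 1;
        if (candidate < int.MinValue || candidate > int.MaxValue)
            break;
        value = candidate;
        shift++;
    }
    // (int)value >> shift == input for every possible input
    return new int[] { shift, (int)value };
}
For input 123456 this yields { 14, 2022703104 }, matching the example above; for Int32.MinValue it yields { 0, -2147483648 }, which is the only possible pair for that value.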

Related

Factorial method returns 0 when dealing with big numbers [duplicate]

int n = Convert.ToInt32(Console.ReadLine());
int factorial = 1;
for (int i = 1; i <= n; i++)
{
    factorial *= i;
}
Console.WriteLine(factorial);
This code runs in a Console Application, but when the number is 34 or above the application returns 0.
Why is 0 returned and what can be done to compute factorial of large numbers?
You're going out of the range of what the variable can store. A factorial grows faster than an exponential, so it overflows quickly. Try using ulong (max value 2^64 - 1 = 18,446,744,073,709,551,615) instead of int (max value 2^31 - 1 = 2,147,483,647), i.e. ulong factorial = 1 - that should get you a bit further.
If you need to go even further, .NET 4 and up has BigInteger, which can store arbitrarily large numbers.
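For illustration, the ulong variant might look like this (note it is still exact only up to 20!, since 21! already exceeds 2^64 - 1):
int n = Convert.ToInt32(Console.ReadLine());
ulong factorial = 1;                // 64-bit unsigned instead of 32-bit signed
for (int i = 1; i <= n; i++)
{
    factorial *= (ulong)i;
}
Console.WriteLine(factorial);       // exact up to n = 20; overflows silently beyond that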
You are getting 0 because of the way integer overflow is handled in most programming languages. You can easily see what happens if you output the result of each computation in the loop (using hex representation):
int n = Convert.ToInt32(Console.ReadLine());
int factorial = 1;
for (int i = 1; i <= n; i++)
{
    factorial *= i;
    Console.WriteLine("{0:x}", factorial);
}
Console.WriteLine(factorial);
For n = 34 the output looks like:
1
2
6
18
78
2d0
13b0
...
2c000000
80000000
80000000
0
Basically, every factor of 2 shifts the number left by one bit, and once the product has accumulated enough twos, all significant digits fall out of a 32-bit integer (e.g. the first 6 numbers contribute 4 twos: 1, 2, 3, 2*2, 5, 2*3, so their product 0x2d0 has 4 zero bits at the end).
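For the record, the total number of factors of 2 in 34! is floor(34/2) + floor(34/4) + floor(34/8) + floor(34/16) + floor(34/32) = 17 + 8 + 4 + 2 + 1 = 32, so 34! is divisible by 2^32 and the low 32 bits kept by an Int32 are all zero - hence the 0.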
If you are using .NET 4.0 and want to calculate the factorial of 1000, then use BigInteger instead of Int32, Int64 or even UInt64. A problem statement like "doesn't work" is not quite sufficient to give a good suggestion.
Your code will look something like:
using System;
using System.Numerics;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main()
        {
            int factorial = Convert.ToInt32(Console.ReadLine());
            var result = CalculateFactorial(factorial);
            Console.WriteLine(result);
            Console.ReadLine();
        }

        private static BigInteger CalculateFactorial(int value)
        {
            BigInteger result = new BigInteger(1);
            for (int i = 1; i <= value; i++)
            {
                result *= i;
            }
            return result;
        }
    }
}

Random number between int.MinValue and int.MaxValue, inclusive

Here's a bit of a puzzler: Random.Next() has an overload that accepts a minimum value and a maximum value. This overload returns a number that is greater than or equal to the minimum value (inclusive) and less than the maximum value (exclusive).
I would like to include the entire range including the maximum value. In some cases, I could accomplish this by just adding one to the maximum value. But in this case, the maximum value can be int.MaxValue, and adding one to this would not accomplish what I want.
So does anyone know a good trick to get a random number from int.MinValue to int.MaxValue, inclusively?
UPDATE:
Note that the lower range can be int.MinValue but can also be something else. If I know it would always be int.MinValue then the problem would be simpler.
The internal implementation of Random.Next(int minValue, int maxValue) generates two samples for large ranges, like the range between Int32.MinValue and Int32.MaxValue. For the NextInclusive method I had to use another large range Next, totaling four samples. So the performance should be comparable with the version that fills a buffer with 4 bytes (one sample per byte).
public static class RandomExtensions
{
    public static int NextInclusive(this Random random, int minValue, int maxValue)
    {
        if (maxValue == Int32.MaxValue)
        {
            if (minValue == Int32.MinValue)
            {
                var value1 = random.Next(Int32.MinValue, Int32.MaxValue);
                var value2 = random.Next(Int32.MinValue, Int32.MaxValue);
                return value1 < value2 ? value1 : value1 + 1;
            }
            return random.Next(minValue - 1, Int32.MaxValue) + 1;
        }
        return random.Next(minValue, maxValue + 1);
    }
}
Some results:
new Random(0).NextInclusive(int.MaxValue - 1, int.MaxValue); // returns int.MaxValue
new Random(1).NextInclusive(int.MaxValue - 1, int.MaxValue); // returns int.MaxValue - 1
new Random(0).NextInclusive(int.MinValue, int.MinValue + 1); // returns int.MinValue + 1
new Random(1).NextInclusive(int.MinValue, int.MinValue + 1); // returns int.MinValue
new Random(24917099).NextInclusive(int.MinValue, int.MaxValue); // returns int.MinValue
var random = new Random(784288084);
random.NextInclusive(int.MinValue, int.MaxValue);
random.NextInclusive(int.MinValue, int.MaxValue); // returns int.MaxValue
Update: My implementation has mediocre performance for the largest possible range (Int32.MinValue to Int32.MaxValue), so I came up with a new one that is 4 times faster. It produces around 22,000,000 random numbers per second on my machine. I don't think it can get much faster than that.
public static int NextInclusive(this Random random, int minValue, int maxValue)
{
    if (maxValue == Int32.MaxValue)
    {
        if (minValue == Int32.MinValue)
        {
            var value1 = random.Next() % 0x10000;
            var value2 = random.Next() % 0x10000;
            return (value1 << 16) | value2;
        }
        return random.Next(minValue - 1, Int32.MaxValue) + 1;
    }
    return random.Next(minValue, maxValue + 1);
}
Some results:
new Random(0).NextInclusive(int.MaxValue - 1, int.MaxValue); // = int.MaxValue
new Random(1).NextInclusive(int.MaxValue - 1, int.MaxValue); // = int.MaxValue - 1
new Random(0).NextInclusive(int.MinValue, int.MinValue + 1); // = int.MinValue + 1
new Random(1).NextInclusive(int.MinValue, int.MinValue + 1); // = int.MinValue
new Random(1655705829).NextInclusive(int.MinValue, int.MaxValue); // = int.MaxValue
var random = new Random(1704364573);
random.NextInclusive(int.MinValue, int.MaxValue);
random.NextInclusive(int.MinValue, int.MaxValue);
random.NextInclusive(int.MinValue, int.MaxValue); // = int.MinValue
No casting, no long, all boundary cases taken into account, best performance.
static class RandomExtension
{
    private static readonly byte[] bytes = new byte[sizeof(int)];

    public static int InclusiveNext(this Random random, int min, int max)
    {
        if (max < int.MaxValue)
            // can safely increase 'max'
            return random.Next(min, max + 1);

        // now 'max' is definitely 'int.MaxValue'
        if (min > int.MinValue)
            // can safely decrease 'min',
            // so get ['min' - 1, 'max' - 1]
            // and move it to ['min', 'max']
            return random.Next(min - 1, max) + 1;

        // now 'max' is definitely 'int.MaxValue'
        // and 'min' is definitely 'int.MinValue',
        // so the only option is
        random.NextBytes(bytes);
        return BitConverter.ToInt32(bytes, 0);
    }
}
Well, I have a trick. I'm not sure I'd describe it as a "good trick", but I feel like it might work.
public static class RandomExtensions
{
    public static int NextInclusive(this Random rng, int minValue, int maxValue)
    {
        if (maxValue == int.MaxValue)
        {
            var bytes = new byte[4];
            rng.NextBytes(bytes);
            return BitConverter.ToInt32(bytes, 0);
        }
        return rng.Next(minValue, maxValue + 1);
    }
}
So, basically an extension method that simply generates four random bytes and converts them to an int if the upper bound is int.MaxValue, and otherwise just uses the standard Next(int, int) overload.
Note that if maxValue is int.MaxValue it will ignore minValue. Guess I didn't account for that...
Split the ranges in two, and compensate for the MaxValue:
r.Next(2) == 0 ? r.Next(int.MinValue, 0) : (1 + r.Next(-1, int.MaxValue))
If we make the ranges of equal size, we can get the same result with different math. Here we rely on the fact that int.MinValue = -1 - int.MaxValue:
r.Next(int.MinValue, 0) - (r.Next(2) == 0 ? 0 : int.MinValue)
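To see why the second form works: when the coin flip subtracts int.MinValue, the expression becomes r.Next(int.MinValue, 0) - int.MinValue, which maps the range [int.MinValue, -1] onto [0, int.MaxValue]; when it subtracts 0, the result stays in [int.MinValue, -1]. Each half contains exactly 2^31 values, so together they cover every Int32 with equal probability.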
I'd suggest using System.Numerics.BigInteger like this:
class InclusiveRandom
{
    private readonly Random rnd = new Random();

    public byte Next(byte min, byte max) => (byte)NextHelper(min, max);
    public sbyte Next(sbyte min, sbyte max) => (sbyte)NextHelper(min, max);
    public short Next(short min, short max) => (short)NextHelper(min, max);
    public ushort Next(ushort min, ushort max) => (ushort)NextHelper(min, max);
    public int Next(int min, int max) => (int)NextHelper(min, max);
    public uint Next(uint min, uint max) => (uint)NextHelper(min, max);
    public long Next(long min, long max) => (long)NextHelper(min, max);
    public ulong Next(ulong min, ulong max) => (ulong)NextHelper(min, max);

    private BigInteger NextHelper(BigInteger min, BigInteger max)
    {
        if (max <= min)
            throw new ArgumentException($"max {max} should be greater than min {min}");
        return min + RandomHelper(max - min);
    }

    private BigInteger RandomHelper(BigInteger bigInteger)
    {
        byte[] bytes = bigInteger.ToByteArray();
        BigInteger random;
        do
        {
            rnd.NextBytes(bytes);
            bytes[bytes.Length - 1] &= 0x7F;
            random = new BigInteger(bytes);
        } while (random > bigInteger);
        return random;
    }
}
I tested it with sbyte.
var rnd = new InclusiveRandom();
var frequency = Enumerable.Range(sbyte.MinValue, sbyte.MaxValue - sbyte.MinValue + 1).ToDictionary(i => (sbyte)i, i => 0ul);
var count = 100000000;
for (var i = 0; i < count; i++)
    frequency[rnd.Next(sbyte.MinValue, sbyte.MaxValue)]++;
foreach (var i in frequency)
    chart1.Series[0].Points.AddXY(i.Key, (double)i.Value / count);
chart1.ChartAreas[0].AxisY.StripLines
    .Add(new StripLine { Interval = 0, IntervalOffset = 1d / 256, StripWidth = 0.0003, BackColor = Color.Red });
Distribution is OK.
This is guaranteed to work with negative and non-negative values:
public static int NextIntegerInclusive(this Random r, int min_value, int max_value)
{
    if (max_value < min_value)
    {
        throw new InvalidOperationException("max_value must be greater than min_value.");
    }
    long offsetFromZero = (long)min_value;  // e.g. -2,147,483,648
    long bound = (long)max_value;           // e.g. 2,147,483,647
    bound -= offsetFromZero;                // e.g. 4,294,967,295 (uint.MaxValue)
    bound += Math.Sign(bound);              // e.g. 4,294,967,296 (uint.MaxValue + 1)
    return (int)(Math.Round(r.NextDouble() * bound) + offsetFromZero); // e.g. -2,147,483,648 => 2,147,483,647
}
It's actually interesting that this isn't the implementation for Random.Next(int, int), because you can derive the behavior of exclusive from the behavior of inclusive, but not the other way around.
public static class RandomExtensions
{
    private const long IntegerRange = (long)int.MaxValue - int.MinValue;

    public static int NextInclusive(this Random random, int minValue, int maxValue)
    {
        if (minValue > maxValue)
        {
            throw new ArgumentOutOfRangeException(nameof(minValue));
        }
        var buffer = new byte[4];
        random.NextBytes(buffer);
        var a = BitConverter.ToInt32(buffer, 0);
        var b = a - (long)int.MinValue;
        var c = b * (1.0 / IntegerRange);
        var d = c * ((long)maxValue - minValue + 1);
        var e = (long)d + minValue;
        return (int)e;
    }
}
new Random(0).NextInclusive(int.MaxValue - 1, int.MaxValue); // returns int.MaxValue
new Random(1).NextInclusive(int.MaxValue - 1, int.MaxValue); // returns int.MaxValue - 1
new Random(0).NextInclusive(int.MinValue, int.MinValue + 1); // returns int.MinValue + 1
new Random(1).NextInclusive(int.MinValue, int.MinValue + 1); // returns int.MinValue
new Random(-451732719).NextInclusive(int.MinValue, int.MaxValue); // returns int.MinValue
new Random(-394328071).NextInclusive(int.MinValue, int.MaxValue); // returns int.MaxValue
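As an aside on the remark above about deriving exclusive from inclusive, a minimal sketch of such a wrapper (a hypothetical helper added to the same extensions class, not part of the answer):
public static int NextExclusive(this Random random, int minValue, int maxValue)
{
    if (minValue >= maxValue)
        throw new ArgumentOutOfRangeException(nameof(maxValue));
    // an exclusive upper bound is just an inclusive upper bound of maxValue - 1
    return random.NextInclusive(minValue, maxValue - 1);
}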
As I understand it, you want Random to produce a value between -2,147,483,648 and +2,147,483,647. But the problem is that Random, given those bounds, will only return values from -2,147,483,648 to +2,147,483,646, as the maximum is exclusive.
Option 0: Take the thing away and learn to do without it
Douglas Adams was not a Programmer AFAIK, but he has some good advice for us: "The technology involved in making anything invisible is so infinitely complex that nine hundred and ninety-nine billion, nine hundred and ninety-nine million, nine hundred and ninety-nine thousand, nine hundred and ninety-nine times out of a trillion it is much simpler and more effective just to take the thing away and do without it."
This might be such a case.
Option 1: We need a bigger Random!
Random.Next() takes Int32 arguments. One option I can think of would be to use a different random function which accepts the next larger integer type (Int64) as input, since an Int32 range fits easily into an Int64: Int64 number = (Int64)Int32.MaxValue + 1;
But AFAIK you would have to go outside the .NET libraries to do this, and at that point you might as well look for a Random that includes the maximum.
But I think there is a mathematical reason it had to exclude one value.
Option 2: Roll more
Another way is to use two calls to Random, one for each half of the range, and then add the results:
int Number1 = rng.Next(int.MinValue, 0);
int Number2 = rng.Next(0, int.MaxValue);
int result = Number1 + Number2;
However, I am fairly certain that will change the random distribution. My P&P RPG experience gave me some feel for dice probabilities, and I know for a fact that rolling 2 dice (or the same die twice) gives a very different result distribution than a single die. If you do not care too much about the distribution, though, it is worth a look.
Option 3: Do you need the full range? Or do you just care for min and max to be in it?
I assume you are doing some form of testing and you need both Int.MaxValue and Int.MinValue to be in the range. But do you need every value in between as well, or could you do without one of them?
If you have to lose a value, would you prefer losing 4 rather than int.MaxValue?
Number = rng.Next(int.MinValue, int.MaxValue);
if (Number > 3)
    Number = Number + 1;
Code like this would get you every number between MinValue and MaxValue except 4. But in most cases, code that can deal with 3 and 5 can also deal with 4, so there is no need to explicitly test 4.
Of course, that assumes 4 is not some important test number that has to be run (I avoided 1 and 0 for those reasons). You could also pick the number to skip randomly:
skipAbleNumber = rng.Next(int.MinValue + 1, int.MaxValue);
And then use > skipAbleNumber rather than > 4.
You cannot use Random.Next() alone to achieve what you want, because you cannot map a sequence of N numbers onto N+1 numbers without missing one :). Period.
But you can use Random.NextDouble(), which returns a double x with
0 <= x < 1, i.e. in [0, 1)
where [ is the inclusive bound and ) the exclusive one.
How do we map N numbers onto [0, 1)?
You need to split [0, 1) into N equal segments:
[0, 1/N), [1/N, 2/N), ... [(N-1)/N, 1)
And here is where it matters that one border is inclusive and the other exclusive: all N segments are exactly equal!
Here is my code: I made it as a simple console program.
class Program
{
    private static Int64 _segmentsQty;
    private static double _step;
    private static Random _random = new Random();

    static void Main()
    {
        InclusiveRandomPrep();
        for (int i = 1; i < 20; i++)
        {
            Console.WriteLine(InclusiveRandom());
        }
        Console.ReadLine();
    }

    public static void InclusiveRandomPrep()
    {
        _segmentsQty = (Int64)int.MaxValue - int.MinValue;
        _step = 1.0 / _segmentsQty;
    }

    public static int InclusiveRandom()
    {
        var randomDouble = _random.NextDouble();
        var times = randomDouble / _step;
        var result = (Int64)Math.Floor(times);
        return (int)result + int.MinValue;
    }
}
This method can give you a random integer within any integer limits. If the maximum limit is less than int.MaxValue, it uses the ordinary Random.Next(Int32, Int32), but adds 1 to the upper limit to include its value. If not, but the lower limit is greater than int.MinValue, it lowers the lower limit by 1 to shift the range down by one, then adds 1 to the result. Finally, if the limits are int.MinValue and int.MaxValue, it generates a random integer 'a' that is either 0 or 1 with 50% probability each, then generates two other integers: the first between int.MinValue and -1 inclusive (2,147,483,648 values) and the second between 0 and int.MaxValue inclusive (2,147,483,648 values as well), and uses 'a' to select between them, so every integer is chosen with equal probability.
private int RandomInclusive(int min, int max)
{
    if (max < int.MaxValue)
        return Random.Next(min, max + 1);
    if (min > int.MinValue)
        return Random.Next(min - 1, max) + 1;
    int a = Random.Next(2);
    return Random.Next(int.MinValue, 0) * a + (Random.Next(-1, int.MaxValue) + 1) * (1 - a);
}
What about this?
using System;

public class Example
{
    public static void Main()
    {
        Random rnd = new Random();
        int min_value = 10;
        int max_value = 21; // upper bound is exclusive, so use 21 to include 20
        Console.WriteLine("\n20 random integers from 10 to 20:");
        for (int ctr = 1; ctr <= 20; ctr++)
        {
            Console.Write("{0,6}", rnd.Next(min_value, max_value));
            if (ctr % 5 == 0) Console.WriteLine();
        }
    }
}
You can try this. A bit hacky but can get you both min and max inclusive.
static void Main(string[] args)
{
    int x = 0;
    var r = new Random();
    for (var i = 0; i < 32; i++)
        x = x | (r.Next(0, 2) << i);
    Console.WriteLine(x);
    Console.ReadKey();
}
You can randomly add 1 to the generated number, so the result is still random and covers the full integer range.
public static class RandomExtension
{
    public static int NextInclusive(this Random random, int minValue, int maxValue)
    {
        var randInt = random.Next(minValue, maxValue);
        var plus = random.Next(0, 2);
        return randInt + plus;
    }
}
Will this work for you?
int random(Random rnd, int min, int max)
{
    return Convert.ToInt32(rnd.NextDouble() * (max - min) + min);
}

Detecting overflow in fixed-point multiplication

Short version: how can I detect overflow using the fixed-point multiplication described here but for a signed type?
Long version:
I still have some overflow issues with my Q31.32 fixed point type. To make it easier to work out examples on paper, I've made a much smaller type using the same algorithm, a Q3.4 based on sbyte. I figure that if I can work out all the kinks for a Q3.4 type, the same logic should apply for a Q31.32 one.
Note that I could very easily implement Q3.4 multiplication by performing it on a 16-bit integer, but I'm pretending that option doesn't exist, because for the Q31.32 type I'd need a 128-bit integer, which doesn't exist (and BigInteger is too slow).
I want my multiplication to handle overflow by saturation: when overflow happens, the result is the largest or smallest representable value, depending on the signs of the operands.
This is basically how the type is represented:
struct Fix8 {
    sbyte m_rawValue;

    public static readonly Fix8 One = new Fix8(1 << 4);
    public static readonly Fix8 MinValue = new Fix8(sbyte.MinValue);
    public static readonly Fix8 MaxValue = new Fix8(sbyte.MaxValue);

    Fix8(sbyte value) {
        m_rawValue = value;
    }

    public static explicit operator decimal(Fix8 value) {
        return (decimal)value.m_rawValue / One.m_rawValue;
    }

    public static explicit operator Fix8(decimal value) {
        var nearestExact = Math.Round(value * 16m) * 0.0625m;
        return new Fix8((sbyte)(nearestExact * One.m_rawValue));
    }
}
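(For reference: with One == 16, a raw value of 56 represents 3.5, 127 (MaxValue) represents 7.9375, and -128 (MinValue) represents -8.)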
And this is how I currently handle multiplication:
public static Fix8 operator *(Fix8 x, Fix8 y) {
    sbyte xl = x.m_rawValue;
    sbyte yl = y.m_rawValue;
    // split x and y into their highest and lowest 4 bits
    byte xlo = (byte)(xl & 0x0F);
    sbyte xhi = (sbyte)(xl >> 4);
    byte ylo = (byte)(yl & 0x0F);
    sbyte yhi = (sbyte)(yl >> 4);
    // perform cross-multiplications
    byte lolo = (byte)(xlo * ylo);
    sbyte lohi = (sbyte)((sbyte)xlo * yhi);
    sbyte hilo = (sbyte)(xhi * (sbyte)ylo);
    sbyte hihi = (sbyte)(xhi * yhi);
    // shift results as appropriate
    byte loResult = (byte)(lolo >> 4);
    sbyte midResult1 = lohi;
    sbyte midResult2 = hilo;
    sbyte hiResult = (sbyte)(hihi << 4);
    // add everything
    sbyte sum = (sbyte)((sbyte)loResult + midResult1 + midResult2 + hiResult);
    // if the top 4 bits of hihi (unused in the result) are neither all 0s nor all 1s,
    // then the result overflowed.
    sbyte topCarry = (sbyte)(hihi >> 4);
    bool opSignsEqual = ((xl ^ yl) & sbyte.MinValue) == 0;
    if (topCarry != 0 && topCarry != -1) {
        return opSignsEqual ? MaxValue : MinValue;
    }
    // if the signs of the operands are equal and the sign of the result is negative,
    // then the multiplication overflowed upwards;
    // the reverse is also true
    if (opSignsEqual) {
        if (sum < 0) {
            return MaxValue;
        }
    }
    else {
        if (sum > 0) {
            return MinValue;
        }
    }
    return new Fix8(sum);
}
This gives results accurate to within the precision of the type and handles most overflow cases. It doesn't, however, handle these ones, for example:
Failed -8 * 2 : expected -8 but got 0
Failed 3.5 * 5 : expected 7,9375 but got 1,5
Let's work out how the multiplication happens for the first one.
-8 and 2 are represented as x = 0x80 and y = 0x20.
xlo = 0x80 & 0x0F = 0x00
xhi = 0x80 >> 4 = 0xf8
ylo = 0x20 & 0x0F = 0x00
yhi = 0x20 >> 4 = 0x02
lolo = xlo * ylo = 0x00
lohi = xlo * yhi = 0x00
hilo = xhi * ylo = 0x00
hihi = xhi * yhi = 0xf0
The sum is obviously 0 as all terms are 0 save for hihi, but only the lowest 4 bits of hihi are used in the final sum.
My usual overflow detection magic doesn't work here: the result is zero, so the sign of the result is meaningless (e.g. 0.0625 * -0.0625 == 0 after rounding down; 0 is positive, yet the signs of the operands differ); also, the high bits of hihi are 1111, which often happens even when there's no overflow.
Basically I don't know how to detect that overflow happened here. Is there a more general method?
You should examine hihi to see whether it contains any relevant bits outside the range of the result. You can also compare the highest bit of the result with the corresponding bit in hihi to see whether a carry propagated that far, and if it did (i.e. the bit changed), whether that indicates an overflow (i.e. the bit changed in the wrong direction). All of this would probably be easier to formulate if you were using one's complement notation, and treat the sign bits separately. But in that case, your example of −8 would be pointless.
Looking at your example, you have hihi = 0xf0.
hihi 11110000
result ±###.####
So in this case, if there were no overflow in hihi alone, then the first 5 bits would all be the same, and the sign of the result would match the sign of hihi. This is not the case here. You can check this using
if ((hihi & 0x08) * 0x1f != (hihi & 0xf8))
    handle_overflow();
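Working that check through for the example: hihi & 0x08 is 0x00, so the left-hand side is 0x00 * 0x1f = 0x00, while the right-hand side hihi & 0xf8 is 0xf0; the two differ, so the overflow is caught.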
The carry into hihi can probably be detected most easily by adding the result one summand at a time and performing common overflow detection after each step. Haven't got a nice piece of code for that ready.
This took me a long time, but I eventually figured everything out. This code is tested to work for every possible combination of x and y in the range allowed by sbyte. Here is the commented code:
static sbyte AddOverflowHelper(sbyte x, sbyte y, ref bool overflow) {
    var sum = (sbyte)(x + y);
    // x + y overflows if sign(x) ^ sign(y) != sign(sum)
    overflow |= ((x ^ y ^ sum) & sbyte.MinValue) != 0;
    return sum;
}

/// <summary>
/// Multiplies two Fix8 numbers.
/// Deals with overflow by saturation.
/// </summary>
public static Fix8 operator *(Fix8 x, Fix8 y) {
    // Using the cross-multiplication algorithm, for learning purposes.
    // It would be both trivial and much faster to use an Int16, but this technique
    // won't work for a Fix64, since there's no Int128 or equivalent (and BigInteger is too slow).
    sbyte xl = x.m_rawValue;
    sbyte yl = y.m_rawValue;

    byte xlo = (byte)(xl & 0x0F);
    sbyte xhi = (sbyte)(xl >> 4);
    byte ylo = (byte)(yl & 0x0F);
    sbyte yhi = (sbyte)(yl >> 4);

    byte lolo = (byte)(xlo * ylo);
    sbyte lohi = (sbyte)((sbyte)xlo * yhi);
    sbyte hilo = (sbyte)(xhi * (sbyte)ylo);
    sbyte hihi = (sbyte)(xhi * yhi);

    byte loResult = (byte)(lolo >> 4);
    sbyte midResult1 = lohi;
    sbyte midResult2 = hilo;
    sbyte hiResult = (sbyte)(hihi << 4);

    bool overflow = false;
    // Check for overflow at each step of the sum; if it happens, overflow will be true
    sbyte sum = AddOverflowHelper((sbyte)loResult, midResult1, ref overflow);
    sum = AddOverflowHelper(sum, midResult2, ref overflow);
    sum = AddOverflowHelper(sum, hiResult, ref overflow);

    bool opSignsEqual = ((xl ^ yl) & sbyte.MinValue) == 0;

    // if the signs of the operands are equal and the sign of the result is negative,
    // then the multiplication overflowed positively;
    // the reverse is also true
    if (opSignsEqual) {
        if (sum < 0 || (overflow && xl > 0)) {
            return MaxValue;
        }
    }
    else {
        if (sum > 0) {
            return MinValue;
        }
        // If the signs differ, both operands' magnitudes are greater than 1,
        // and the result is greater than the negative operand, then there was negative overflow.
        sbyte posOp, negOp;
        if (xl > yl) {
            posOp = xl;
            negOp = yl;
        }
        else {
            posOp = yl;
            negOp = xl;
        }
        if (sum > negOp && negOp < -(1 << 4) && posOp > (1 << 4)) {
            return MinValue;
        }
    }

    // if the top 4 bits of hihi (unused in the result) are neither all 0s nor all 1s,
    // then the result overflowed.
    sbyte topCarry = (sbyte)(hihi >> 4);
    // -17 (-1.0625) is a problematic value which never causes overflow but messes up the carry bits
    if (topCarry != 0 && topCarry != -1 && xl != -17 && yl != -17) {
        return opSignsEqual ? MaxValue : MinValue;
    }

    // Round up if necessary, but don't overflow
    var lowCarry = (byte)(lolo << 4);
    if (lowCarry >= 0x80 && sum < sbyte.MaxValue) {
        ++sum;
    }
    return new Fix8(sum);
}
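For what it's worth, a quick sanity check of the two cases quoted earlier, using the decimal conversion operators shown in the question (assuming they are accessible from the calling code):
var a = (Fix8)(-8m) * (Fix8)2m;   // -8 * 2 = -16 saturates to MinValue (-8)
var b = (Fix8)3.5m * (Fix8)5m;    // 3.5 * 5 = 17.5 saturates to MaxValue (7.9375)
Console.WriteLine((decimal)a);    // -8
Console.WriteLine((decimal)b);    // 7.9375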
I'm putting all this together into a properly unit tested fixed-point math library for .NET, which will be available here: https://github.com/asik/FixedMath.Net


Converting a int to a BCD byte array

I want to convert an int to a byte[2] array using BCD.
The int in question will come from DateTime representing the Year and must be converted to two bytes.
Is there any pre-made function that does this or can you give me a simple way of doing this?
example:
int year = 2010;
would output:
new byte[2] { 0x20, 0x10 };
static byte[] Year2Bcd(int year) {
    if (year < 0 || year > 9999) throw new ArgumentException();
    int bcd = 0;
    for (int digit = 0; digit < 4; ++digit) {
        int nibble = year % 10;
        bcd |= nibble << (digit * 4);
        year /= 10;
    }
    return new byte[] { (byte)((bcd >> 8) & 0xff), (byte)(bcd & 0xff) };
}
Beware that you asked for a big-endian result, that's a bit unusual.
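(Quick check: for year = 2010 the packed value comes out as 0x2010, so the method returns { 0x20, 0x10 }, exactly the big-endian layout asked for in the question.)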
Use this method.
public static byte[] ToBcd(int value) {
    if (value < 0 || value > 99999999)
        throw new ArgumentOutOfRangeException("value");
    byte[] ret = new byte[4];
    for (int i = 0; i < 4; i++) {
        ret[i] = (byte)(value % 10);
        value /= 10;
        ret[i] |= (byte)((value % 10) << 4);
        value /= 10;
    }
    return ret;
}
This is essentially how it works.
If the value is less than 0 or greater than 99999999, the value won't fit in four bytes. More formally, if the value is less than 0 or is 10^(n*2) or greater, where n is the number of bytes, the value won't fit in n bytes.
For each byte:
Set that byte to the remainder of the value divided by 10. (This places the last digit in the low nibble [half-byte] of the current byte.)
Divide the value by 10.
Add 16 times the remainder of the value divided by 10 to the byte. (This places the now-last digit in the high nibble of the current byte.)
Divide the value by 10.
(One optimization is to set every byte to 0 beforehand, which .NET implicitly does when it allocates a new array, and to stop iterating when the value reaches 0; that optimization is not done in the code above, for simplicity. Also, some compilers or assemblers offer a divide/remainder routine that returns both the quotient and the remainder in one division step, an optimization which is usually unnecessary here.)
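(Worked example: ToBcd(2010) produces { 0x10, 0x20, 0x00, 0x00 }. The least significant digit pair comes first, i.e. the byte order is the reverse of the { 0x20, 0x10 } layout requested in the question.)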
Here's a terrible brute-force version. I'm sure there's a better way than this, but it ought to work anyway.
int digitOne = year / 1000;
int digitTwo = (year - digitOne * 1000) / 100;
int digitThree = (year - digitOne * 1000 - digitTwo * 100) / 10;
int digitFour = year - digitOne * 1000 - digitTwo * 100 - digitThree * 10;
byte[] bcdYear = new byte[] { (byte)(digitOne << 4 | digitTwo), (byte)(digitThree << 4 | digitFour) };
The sad part about it is that fast binary to BCD conversions are built into the x86 microprocessor architecture, if you could get at them!
Here is a slightly cleaner version than Jeffrey's:
static byte[] IntToBCD(int input)
{
    if (input > 9999 || input < 0)
        throw new ArgumentOutOfRangeException("input");
    int thousands = input / 1000;
    int hundreds = (input -= thousands * 1000) / 100;
    int tens = (input -= hundreds * 100) / 10;
    int ones = (input -= tens * 10);
    byte[] bcd = new byte[] {
        (byte)(thousands << 4 | hundreds),
        (byte)(tens << 4 | ones)
    };
    return bcd;
}
Maybe a simple parse function containing this loop:
i = 0;
while (id > 0)
{
    twodigits = id % 100;                          // need 2 digits per byte
    arr[i] = twodigits % 10 + twodigits / 10 * 16; // first digit in the low 4 bits, second digit shifted up by 4 bits
    id /= 100;
    i++;
}
A more general solution:
private IEnumerable<Byte> GetBytes(Decimal value)
{
    Byte currentByte = 0;
    Boolean odd = true;
    while (value > 0)
    {
        if (odd)
            currentByte = 0;
        Decimal rest = value % 10;
        value = (value - rest) / 10;
        currentByte |= (Byte)(odd ? (Byte)rest : (Byte)((Byte)rest << 4));
        if (!odd)
            yield return currentByte;
        odd = !odd;
    }
    if (!odd)
        yield return currentByte;
}
Same version as Peter O. but in VB.NET
Public Shared Function ToBcd(ByVal pValue As Integer) As Byte()
    If pValue < 0 OrElse pValue > 99999999 Then Throw New ArgumentOutOfRangeException("value")
    Dim ret As Byte() = New Byte(3) {} 'All bytes are init with 0's
    For i As Integer = 0 To 3
        ret(i) = CByte(pValue Mod 10)
        pValue = Math.Floor(pValue / 10.0)
        ret(i) = ret(i) Or CByte((pValue Mod 10) << 4)
        pValue = Math.Floor(pValue / 10.0)
        If pValue = 0 Then Exit For
    Next
    Return ret
End Function
The trick here is to be aware that simply using pValue /= 10 will round the value, so if for instance the argument is 16, the first part of the byte will be correct, but the result of the division will be 2 (as 1.6 is rounded up). Therefore I use the Math.Floor method.
I made a generic routine posted at IntToByteArray that you could use like:
var yearInBytes = ConvertBigIntToBcd(2010, 2);
static byte[] IntToBCD(int input) {
    byte[] bcd = new byte[] {
        (byte)(input >> 8),
        (byte)(input & 0x00FF)
    };
    return bcd;
}
