Issue with Random.Next() not choosing random values - C#

As I understand it, Random.Next() uses the system time to get its seed, but when iterating through a loop very quickly the system time hasn't changed, or has hardly changed, giving me the same "random" number. I'm using Random to select starting positions at which to begin writing bytes, producing about 2 seconds of static here and there in music files. 30 different positions are selected, but they're almost all the same: I get nearly continuous static from the beginning, broken about 3 times for just a few seconds, before the music resumes playing normally at around 30 seconds in. That isn't what I want; I need the static spread out across the entire clip. The problem is int pos: it's not random, and each starting position is nearly identical to all the others, so I get one prolonged stretch of static rather than static scattered throughout the music. My Random instances are declared static, too.
FileStream stream = new FileStream(file, FileMode.Open, FileAccess.ReadWrite);
for (int i = 0; i < 20; i++)
{
    int pos = rand2.Next(75000, 4000000 /*I've been too lazy to get the file length, so I'm using 4000000*/);
    for (int x = 0; x < 500000 /*500000 is a little over 2 seconds of static*/; x++)
    {
        byte number = array[rand.Next(array.Length)];
        stream.Position = pos;
        pos++;
        stream.WriteByte(number);
    }
}
I assumed that each write would take 5 seconds or so (on my slow CPU), which would be enough time for the next Random call to give me a position that isn't identical to, or extremely close to, the previous one. As it stands, I seem to get an initial position of roughly 90000 (the first few seconds of music), and all of the subsequent positions fall within 20 seconds of it. So my question is: what do I need to modify or do differently to achieve my desired result? I'd like a couple of seconds of static scattered throughout the entire clip, not clustered together.
I have a byte array which stores my hex digits; those are randomly selected and that part appears to work fine. It's just the write positions that aren't random at all; they're all in extreme proximity.
byte[] array = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1A, 0x1B, 0x1C, 0x1D, 0x1E, 0x1F };
Thanks.
P.S. I know I should have using blocks for the FileStream; I'll add them when I get around to it.

If 500000 bytes is about 2 seconds of audio, that's roughly 250000 bytes per second, and starting somewhere between 75000 and 4000000 means starting between roughly 0.3s and 16s into the song. That explains why you're not hearing any static later in the clip. Try using the actual file size (minus 500000) as the upper bound of the rand.Next call used to populate pos.
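For illustration, a minimal sketch of that fix (my code, not the poster's; file, rand2 and the 500000-byte burst length come from the question):
using (var stream = new FileStream(file, FileMode.Open, FileAccess.ReadWrite))
{
    int staticLength = 500000;                             // ~2 seconds, per the question
    int upperBound = (int)(stream.Length - staticLength);  // keeps the burst inside the file
    int pos = rand2.Next(75000, upperBound);
    // ... write the noise starting at pos as before ...
}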

When you write:
int pos = rand2.Next(75000, 4000000)
and your file is stereo, 16-bit, 48kHz, then 4000000 / (2 * 2 * 48000) ≈ 20 seconds of sound. Use the file size; it's easy, and you shouldn't be lazy :-).
But if you want to 'add' noise, you shouldn't replace the file's value; rather, add a random value to it.
And why a noise amplitude of only 16 per byte, and a fixed one at that? Using number = rand.Next(amplitude) to compute the added noise would seem fine.
Note also that you're creating two white noises, one on each channel.
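As a rough sketch of what adding noise could look like at the byte level (illustration only, not tested against a real audio file; amplitude is a hypothetical parameter you would choose, and correct 16-bit sample math would read and write two bytes at a time):
stream.Position = pos;
int original = stream.ReadByte();          // read the existing sample byte
int noisy = original + rand.Next(-amplitude, amplitude + 1); // amplitude is hypothetical
noisy = Math.Max(0, Math.Min(255, noisy)); // clamp to a valid byte value
stream.Position = pos;                     // ReadByte advanced the position; step back
stream.WriteByte((byte)noisy);
pos++;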

First, Random.Next does not use the system clock to generate each random number. Random numbers are generated from the seed you provide; the default (parameterless) constructor merely uses the system clock as that seed.
Correct me if I am wrong, but it looks like you are getting random numbers; they're just close enough together to produce almost the same sound, and you want your numbers to be scattered with a large margin (difference between consecutive random numbers). To achieve this, I suggest using a minimum threshold: if the difference between consecutive numbers is less than the threshold, either generate the next one or adjust the value to satisfy the threshold. Code can explain this better than I can.
int RandomNumber(Random rand, int? lastRandomNumber, int min, int max, int threshold)
{
    var number = rand.Next(min, max);
    if (lastRandomNumber.HasValue && Math.Abs(lastRandomNumber.Value - number) < threshold)
    {
        Console.WriteLine("Actual:{0}", number);
        if (lastRandomNumber.Value < number)
        {
            number += threshold;
            if (number > max)
                number = max;
        }
        else
        {
            number -= threshold;
            if (number < min)
                number = min;
        }
    }
    return number;
}
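A hypothetical driver for the method above, using the numbers from the question (the threshold is the length of one static burst, so consecutive bursts can't land on top of each other):
var rand = new Random();
int? last = null;
for (int i = 0; i < 20; i++)
{
    int pos = RandomNumber(rand, last, 75000, 4000000, 500000);
    last = pos;
    // ... write the static burst starting at pos ...
}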

I had the same problem when generating random numbers; I kept getting the same values. The following simple code solved my problem.
private static readonly Random random = new Random();
private static readonly object syncLock = new object();

public int RandomNumber(int min, int max)
{
    lock (syncLock) // synchronize access across threads
    {
        return random.Next(min, max);
    }
}
You would use it like this:
int rn = RandomNumber(0, 10); //for example

Related

BitArray change bit within range

How can I ensure that, when changing a bit in a BitArray, the BitArray's value remains within a given range?
Example:
Given the range [-5.12, 5.12] and
a = 0100000000000000011000100100110111010010111100011010100111111100 ( = 2.048)
By changing a bit at a random position, I need to ensure that the new value remains in the given range.
I'm not 100% sure what you are doing, and this answer assumes you are currently storing a as a 64-bit value (long). The following code may help point you in the right direction.
const double minValue = -5.12;
const double maxValue = 5.12;
var initialValue = Convert.ToInt64("100000000000000011000100100110111010010111100011010100111111100", 2);
var changedValue = ChangeRandomBit(initialValue); // However you're doing this
var changedValueAsDouble = BitConverter.Int64BitsToDouble(changedValue);
if ((changedValueAsDouble < minValue) || (changedValueAsDouble > maxValue))
{
    // Do something
}
It looks like a double (64 bits, and the result has a decimal point).
As you may know, a double has a sign bit, an exponent and a fraction, so you cannot change a random bit and still be guaranteed a value in the range, with some exceptions:
the sign bit can be changed without problem if your range is [-x, +x] (same x on both sides);
changing an exponent or fraction bit will require checking the new value against the range, but changing an exponent or fraction bit from 1 to 0 will make |a| smaller.
I don't know what you are trying to achieve; care to share? Perhaps you are trying to validate or correct something, in which case you may have a look at this.
Here's an extension method that undoes the set bit if the new value of the float is outside the given range. (This is an example only: it assumes the BitArray holds a float and does no checks, which is pretty horrible, so hack a proper solution out of this, including changing it to double.)
static class Extension
{
    public static void SetFloat(this BitArray array, int index, bool value, float min, float max)
    {
        bool old = array.Get(index);
        array.Set(index, value);
        byte[] bytes = new byte[4];
        array.CopyTo(bytes, 0);
        float f = BitConverter.ToSingle(bytes, 0);
        if (f < min || f > max)
            array.Set(index, old); // out of range: undo the change
    }
}
Example use:
static void Main(string[] args)
{
    float f = 2.1f;
    byte[] bytes = System.BitConverter.GetBytes(f);
    BitArray array = new BitArray(bytes);
    array.SetFloat(20, true, -5.12f, 5.12f);
}
If you can actually limit your precision, then this would be a lot easier. For example given the range:
[-5.12, 5.12]
If I multiply 5.12 by 100, I get
[-512, 512]
And the integer 512 in binary is, of course:
1000000000
So now you know you can set any of the first 9 bits and you'll be < 512 if the 10th bit is 0. If you set the 10th bit, you will have to set all the other bits to 0. With a little extra effort, this can be extended to deal with 2's complement negative values too (although, I might be inclined just to convert them to positive values)
Now if you actually need to accommodate the 3 d.p. of 2.048, then you'll need to multiply all your values by 1000 instead, and it will be a little more difficult because 5120 in binary is 1010000000000.
You know you can do anything you want with everything except the most significant bit (MSB) if the MSB is 0. In this case, if the MSB is 1, but the next 2 bits are 0, you can do anything you want with the remaining bits.
The logic involved with dealing directly with the number in IEEE-754 floating point format is probably going to be torturous.
Or you could just go with the "mutate the value and then test it" approach: if it's out of range, go back and try again. That might be suitable in practice, but it isn't guaranteed to terminate.
A final thought, depending on exactly what you are doing, you might want to also look at Gray Codes. The idea of a Gray Code is to make it such that each value is only 1 bit flip apart. With naturally encoded binary, a flip of the MSB has orders of magnitude more impact on the final value than a flip of the LSB.
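If you want to experiment with that idea, the binary-to-Gray conversion is a one-liner (the standard formula, nothing specific to this problem):
static uint BinaryToGray(uint x) => x ^ (x >> 1);
// e.g. BinaryToGray(5) = 0b111 and BinaryToGray(6) = 0b101:
// consecutive inputs differ in exactly one output bit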

Generate a Random Int32 for the full range of possible numbers

If I wanted to generate a random number covering all possible Int32 values, would the following code be a reasonable way of doing so? Is there any reason why it might not be a good idea? (i.e. is the distribution at least as uniform as that of Random.Next() itself?)
public static int NextInt(Random Rnd) // -2,147,483,648 to 2,147,483,647
{
    int AnInt;
    AnInt = Rnd.Next(System.Int32.MinValue, System.Int32.MaxValue);
    AnInt += Rnd.Next(2);
    return AnInt;
}
You could use Random.NextBytes to obtain 4 bytes, then use BitConverter.ToInt32 to convert those to an int.
Something like:
byte[] buf = new byte[4];
Rnd.NextBytes(buf);
int i = BitConverter.ToInt32(buf,0);
Your proposed solution will slightly skew the distribution: minValue and maxValue will occur less frequently than the interior values. As an example, assume that int has a MinValue of -2 and a MaxValue of 1. Here are the possible initial values, each followed by the results after adding Random(2):
-2: -2 -1
-1: -1 0
0: 0 1
Half of the -2 values get modified up to -1, but only half of the 0 values get modified up to 1. So the endpoints -2 and 1 will occur half as often as -1 and 0.
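If you want to see the skew empirically, a quick tally over the same tiny range makes it visible (hypothetical test code):
var rnd = new Random();
var counts = new int[4]; // indices 0..3 hold the tallies for values -2..1
for (int i = 0; i < 600000; i++)
{
    int n = rnd.Next(-2, 1) + rnd.Next(2); // the proposed scheme on the tiny range
    counts[n + 2]++;
}
// Expect roughly 100000 each for -2 and 1, and roughly 200000 each for -1 and 0.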
Damien's solution is good. Another choice would be (with rnd a Random instance):
if (rnd.Next(2) == 0) {
    return rnd.Next(int.MinValue, 0);      // covers MinValue..-1
} else {
    return 1 + rnd.Next(-1, int.MaxValue); // covers 0..MaxValue
}
Another solution, similar to Damien's approach, and faster than the previous one, would be:
int i = r.Next(ushort.MinValue, ushort.MaxValue + 1) << 16;
i |= r.Next(ushort.MinValue, ushort.MaxValue + 1);
A uniform distribution does not mean you get each number exactly once. For that you need a permutation.
Now, if you need a random permutation of all 4 billion numbers, you're a bit stuck: .NET does not allow objects to be larger than 2GB. You can work around that, but I assume it's not really what you need.
If you need fewer numbers (say, 100, or 5 million, anything less than a few billion) without repetitions, you should do this:
Maintain a set of integers, starting empty. Choose a random number. If it's already in the set, choose another random number. If it's not in the set, add it and return it.
That way you guarantee each number will be returned only once.
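A minimal sketch of that approach (assuming the framework's Random; swap in a full-range generator from the other answers if you need all 2^32 values):
var seen = new HashSet<int>();
var rnd = new Random();
int NextUnique()
{
    int candidate;
    do
    {
        candidate = rnd.Next();
    } while (!seen.Add(candidate)); // Add returns false if the value was already present
    return candidate;
}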
I have a class where I read random bytes into an 8KB buffer and produce numbers by converting them from the random bytes. This gives you the full int distribution. The 8KB buffer is used so you do not need to call NextBytes for every new random byte[].
// Get 4 bytes from the random buffer and cast to int (all numbers are equally likely this way)
public int GetRandomInt()
{
    CheckBuf(sizeof(int)); // not shown: presumably refills _buf and advances _idx as needed
    return BitConverter.ToInt32(_buf, _idx);
}

// Fill the buffer with random bytes. Both the Random class and the crypto API support this.
protected override void GetNewBuf(byte[] buf)
{
    _rnd.NextBytes(buf);
}

// The crypto API produces better random numbers but is slower.
public StrongRandomNumberGenerator()
{
    _rnd = new RNGCryptoServiceProvider();
}

Generating uniform random integers with a certain maximum

I want to generate uniform integers that satisfy 0 <= result <= maxValue.
I already have a generator that returns uniform values in the full range of the built in unsigned integer types. Let's call the methods for this byte Byte(), ushort UInt16(), uint UInt32() and ulong UInt64(). Assume that the result of these methods is perfectly uniform.
The signature of the methods I want are uint UniformUInt(uint maxValue) and ulong UniformUInt(ulong maxValue).
What I'm looking for:
Correctness
I'd prefer the return values to be uniformly distributed over the given interval.
But a very small bias is acceptable if it increases performance significantly. By that I mean a bias small enough that a distinguisher would need around 2^64 values to tell it apart from uniform with probability 2/3.
It must work correctly for any maxValue.
Performance
The method should be fast.
Efficiency
The method should consume little raw randomness, since, depending on the underlying generator, generating the raw bytes might be costly. Wasting a few bits is fine, but consuming, say, 128 bits to generate a single number is probably excessive.
It's also possible to cache some left over randomness from the previous call in some member variables.
Be careful with int overflows and wrapping behavior.
I already have a solution (I'll post it as an answer), but it's a bit ugly for my taste, so I'd like ideas for better ones.
Suggestions on how to unit test with large maxValues would be nice too, since I can't generate a histogram with 2^64 buckets from 2^74 random values. Another complication is that with certain bugs, only some maxValue distributions are heavily biased, while others are biased only very slightly.
How about something like this as a general-purpose solution? The algorithm is based on the one used by Java's nextInt method, rejecting any values that would cause a non-uniform distribution. As long as the output of your UInt32 method is perfectly uniform, this should be too.
uint UniformUInt(uint inclusiveMaxValue)
{
    unchecked
    {
        uint exclusiveMaxValue = inclusiveMaxValue + 1;

        // if exclusiveMaxValue is a power of two then we can just use a mask
        // also handles the edge case where inclusiveMaxValue is uint.MaxValue
        if ((exclusiveMaxValue & (~exclusiveMaxValue + 1)) == exclusiveMaxValue)
            return UInt32() & inclusiveMaxValue;

        uint bits, val;
        do
        {
            bits = UInt32();
            val = bits % exclusiveMaxValue;
            // if (bits - val + inclusiveMaxValue) overflows then val has been
            // taken from an incomplete chunk at the end of the range of bits;
            // in that case we reject it and loop again
        } while (bits - val + inclusiveMaxValue < inclusiveMaxValue);
        return val;
    }
}
The rejection process could, theoretically, keep looping forever; in practice the performance should be pretty good. It's difficult to suggest any generally applicable optimisations without knowing (a) the expected usage patterns, and (b) the performance characteristics of your underlying RNG.
For example, if most callers will be specifying a max value <= 255 then it might not make sense to ask for four bytes of randomness every time. On the other hand, the performance benefit of requesting fewer bytes might be outweighed by the additional cost of always checking how many you actually need. (And, of course, once you do have specific information then you can keep optimising and testing until your results are good enough.)
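As a usage illustration (my example, not part of the original answer): UniformUInt(5) yields 0..5 inclusive, so a six-sided die roll is a one-liner. Note also that the worst-case rejection probability is just under one half (when exclusiveMaxValue is slightly above a power of two), so the expected number of UInt32() calls stays below 2.
uint roll = UniformUInt(5) + 1u; // 0..5 uniformly, shifted to 1..6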
I am not sure that this is an answer. It definitely needs more space than a comment, so I have to write it here, but I am willing to delete it if others think it's stupid.
From the OQ I gather that:
entropy bits are very expensive;
everything else should be considered expensive, but less so than entropy.
My idea is to use random binary digits to halve, quarter, ... the maxValue space until it is reduced to a single number. Something like the following.
I'll use maxValue = 333 (decimal) as an example and assume a function getBit() that randomly returns 0 or 1.
offset := 0
space := maxValue
while (space > 0)
{
    // Right-shift the value, keeping the rightmost bit; this should be
    // efficient on x86 and x64 if coded in real code, not pseudocode
    remains := space & 1
    part := floor(space / 2)
    space := part
    // In the 333 example, part is now 166, but 2*166 = 332. If we were to simply choose one
    // half of the space, we would be heavily biased towards the upper half, so in case
    // we have a remainder, we consume a bit of entropy to decide which half is bigger
    if (remains)
        if (getBit())
            part++
    // Now we decide which half to choose, consuming a bit of entropy
    if (getBit())
        offset += part
    // Exit condition: the remaining number space = 0 is guaranteed to be reached
    // In the 333 example, offset will be 0, 166 or 167, and the remaining space will be 166
}
randomResult := offset
getBit() can either come from your entropy source directly, if it is bit-based, or by consuming n bits of entropy at once on the first call (obviously with n being the optimum for your entropy source) and shifting them out until empty.
My current solution. A bit ugly for my tastes. It also has two divisions per generated number, which might negatively impact performance (I haven't profiled this part yet).
uint UniformUInt(uint maxResult)
{
    uint rand;
    uint count = maxResult + 1;
    if (maxResult < 0x100)
    {
        uint usefulCount = (0x100 / count) * count;
        do
        {
            rand = Byte();
        } while (rand >= usefulCount);
        return rand % count;
    }
    else if (maxResult < 0x10000)
    {
        uint usefulCount = (0x10000 / count) * count;
        do
        {
            rand = UInt16();
        } while (rand >= usefulCount);
        return rand % count;
    }
    else if (maxResult != uint.MaxValue)
    {
        uint usefulCount = (uint.MaxValue / count) * count; // reduces upper bound by 1, to avoid long division
        do
        {
            rand = UInt32();
        } while (rand >= usefulCount);
        return rand % count;
    }
    else
    {
        return UInt32();
    }
}
ulong UniformUInt(ulong maxResult)
{
    if (maxResult < 0x100000000UL)
        return UniformUInt((uint)maxResult); // delegate to the uint overload above
    else if (maxResult < ulong.MaxValue)
    {
        ulong rand;
        ulong count = maxResult + 1;
        ulong usefulCount = (ulong.MaxValue / count) * count; // reduces upper bound by 1, since ulong can't represent any more
        do
        {
            rand = UInt64();
        } while (rand >= usefulCount);
        return rand % count;
    }
    else
        return UInt64();
}
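On the unit-testing part of the question: for small maxValue a histogram with a loose tolerance is straightforward. A sketch (the 5-sigma-style bound is arbitrary; this won't help with huge maxValues, but it catches gross modulo bias):
const uint max = 5;
const int samples = 6000000;
var buckets = new long[max + 1];
for (int i = 0; i < samples; i++)
    buckets[UniformUInt(max)]++; // tally each result
double expected = samples / (double)(max + 1);
foreach (long b in buckets)
    if (Math.Abs(b - expected) > 5 * Math.Sqrt(expected))
        throw new Exception("Bucket count is suspiciously far from uniform.");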

Random.Next returns always the same values [duplicate]

This question already has answers here:
Random number generator only generating one random number
(15 answers)
Closed 9 years ago.
I use the method below to generate a unique number, but I always get the same number: -2147483648. Even if I stop the program, recompile it, and run it again, I still see the same number.
public static int GetRandomInt(int length)
{
    var min = Math.Pow(10, length - 1);
    var max = Math.Pow(10, length) - 1;
    var random = new Random();
    return random.Next((int)min, (int)max);
}
Try externalizing the Random instance (it needs to be static here, since GetRandomInt is static):
private static readonly Random _random = new Random();

public static int GetRandomInt(int length)
{
    var min = Math.Pow(10, length - 1);
    var max = Math.Pow(10, length) - 1;
    return _random.Next((int)min, (int)max);
}
This is not an issue of failing to reuse the Random instance; the results he gets should be random across multiple starts, not always -(2^31).
This is an issue of length being too big and the powers of 10 being cast to int. If you break the code into the following lines:
var min = Math.Pow(10, length - 1);
var max = Math.Pow(10, length) - 1;
var random = new Random();
var a = (int)min;
var b = (int)max;
return random.Next(a, b);
you'll see that a and b are both -2147483648, making that the only possible result of Next(a, b) (the documentation specifies that if min == max, Next returns min).
The largest length you can safely use with this method is 9. For a length of 10 you'll get a System.ArgumentOutOfRangeException; for length > 10 you'll get the -2147483648 result.
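The truncation is easy to demonstrate in isolation. The result of an unchecked out-of-range double-to-int cast is unspecified by the language (the poster's runtime produces -2147483648), while a checked cast surfaces the bug as an exception:
double max = Math.Pow(10, 10) - 1;  // 9999999999, too large for an int
var b = (int)max;                   // unspecified result; -2147483648 here
var c = checked((int)max);          // throws OverflowException instead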
You have three problems with your code.
You should externalize your random variable.
You have a truncation error.
The range between min and max is way too large.
The first problem arises because you may not leave enough time between calls for the seed to change when reinitializing your random variable. The second comes from truncating your (potentially very large) numbers down to ints. Finally, your biggest problem is the range between min and max. Consider the range max - min (as defined in your code) for inputs 1 through 19:
length  max - min
1       8
2       89
3       899
4       8999
5       89999
6       899999
7       8999999
8       89999999
9       899999999
10      8999999999
11      89999999999
12      899999999999
13      8999999999999
14      89999999999999
15      899999999999999
16      8999999999999999
17      89999999999999999
18      899999999999999999
19      8999999999999999999
And keep in mind that the maximum int is 2,147,483,647, which max exceeds for any length greater than 9.
You should keep one instance of Random rather than newing it up all the time; that should give you better results.
Also check what length actually is. It may be giving you funny results at the limits.
I think the problem is the calculation of min and max. They will be greater than Int32.MaxValue pretty fast...
In your class, have one instance of Random (static, since GetRandomInt is static), e.g.:
public class MyClass
{
    private static readonly Random random = new Random();

    public static int GetRandomInt(int length)
    {
        var min = Math.Pow(10, length - 1);
        var max = Math.Pow(10, length) - 1;
        return random.Next((int)min, (int)max);
    }
}
The fact that a random generator returns the same values for the same seed exists for testing purposes.
Random classes usually use a seed to initialize themselves, and will return the same sequence provided the seed is the same one:
always reuse the same Random() instance instead of recreating one over and over again;
if you want unpredictable results, use a time-dependent seed rather than a hard-coded one.
It's very difficult to code a truly random number generator. Most methods use external entropy sources (such as mouse movement, CPU temperature, or even complex physical mechanisms such as helium balloons colliding with one another...).
The Random instance should be created only once and then reused. The reason for this is that the RNG is by default seeded with the current system time. If you rapidly create new Random instances (and pull one value from each), then many of them will be seeded with the same timestamp, because the loop probably executes faster than the system clock advances.
Remember, an RNG initialized with seed A will always return sequence B. So if you create three Random instances all seeded with, for example, 123, those three instances will return the same number on the same iteration.
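The seed behavior is easy to verify:
var a = new Random(123);
var b = new Random(123);
Console.WriteLine(a.Next() == b.Next()); // True: same seed, same sequence
Console.WriteLine(a.Next() == b.Next()); // True again, on every iteration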

Need a way to pick a common bit in two bitmasks at random

Imagine two bitmasks, I'll just use 8 bits for simplicity:
01101010
10111011
The 2nd, 4th, and 6th bits (counting from the right) are 1 in both. I want to pick one of those common "on" bits at random, but I want to do it in O(1).
The only way I've found to do this so far is to pick a random "on" bit in one mask, then check whether it's also on in the other, repeating until I find a match. That is still O(n), and in my case the majority of the bits are off in both masks. I do, of course, AND them together first to check whether there are any common bits at all.
Is there a way to do this? If so, I can increase the speed of my function by around 6%. I'm using C#, if that matters. Thanks!
Mike
If you are willing to accept an O(lg n) solution, at the cost of a possibly nonuniform probability, recursively split in half: AND with a mask for the top half of the bits, then with one for the bottom half. If both results are nonzero, choose one at random; otherwise choose the nonzero one. Then split what remains in half again, and so on. This takes 10 comparisons for a 32-bit number; maybe not as few as you would like, but better than 32.
You can save a few ANDs by choosing at random whether to AND with the high half or the low half first, taking the other half if there are no hits and the tested half if there are.
The random number only needs to be generated once, as you only use one bit at each test; just shift the used bit out when you are done with it.
If you have lots of bits, this will be more efficient. I do not see how you can get it down to O(1), though.
For example, with a 32-bit number, first AND with 0xffff0000 and with 0x0000ffff; if (say) the 0xffff0000 result is nonzero, continue with 0xff000000 or 0x00ff0000, and so on until you get down to one bit. This ends up being a lot of tedious code; 32 bits takes 5 layers.
Do you want a uniform random distribution? If so, I don't see any good way around counting the bits and then selecting one at random, or selecting random bits until you hit one that is set.
If you don't care about uniformity, you can select a set bit out of a word randomly with:
unsigned int pick_random(unsigned int w, int size) {
    int bitpos = rng() % size;
    unsigned int mask = ~((1U << bitpos) - 1);
    if (mask & w)
        w &= mask;
    return w - (w & (w - 1));
}
where rng() is your random number generator, w is the word you want to pick from, and size is the relevant size of the word in bits (which may be the machine word size, or may be less, as long as you don't set the upper bits of the word). Then, for your example, you use pick_random(0x6a & 0xbb, 8), or whatever values you like.
This function uniformly randomly selects one bit which is high in both masks. If there are no possible bits to pick, zero is returned instead. The running time is O(n), where n is the number of high bits in the ANDed masks. So if your masks have few high bits, this function can be fast even though the worst case is O(n), which happens when all the bits are high. The implementation in C is as follows:
unsigned int randomMasksBit(unsigned a, unsigned b){
    unsigned int i = a & b; // Calculate the bits which are high in both masks.
    unsigned int count = 0;
    unsigned int randomBit = 0;
    while (i){ // Loop through all high bits.
        count++;
        // Reservoir sampling: keep the current bit with probability 1/count, so
        // every high bit ends up selected with equal probability. Dividing by
        // RAND_MAX + 1.0 keeps the value in [0, 1), so the first bit is always
        // accepted when count == 1.
        if ((rand() / (RAND_MAX + 1.0)) < (1 / (double)count)) randomBit = i & -i;
        i &= i - 1; // Move on to the next high bit.
    }
    return randomBit;
}
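In C#, the same reservoir-sampling idea might look like this (my translation of the C code above, not the answerer's; rng.Next(count) == 0 replaces the floating-point comparison and gives the same 1-in-count selection probability):
static uint RandomCommonBit(uint a, uint b, Random rng)
{
    uint i = a & b;   // bits high in both masks
    uint chosen = 0;
    int count = 0;
    while (i != 0)
    {
        count++;
        if (rng.Next(count) == 0)   // keep this bit with probability 1/count
            chosen = i & (~i + 1u); // isolate the lowest set bit of i
        i &= i - 1;                 // clear that bit and move on
    }
    return chosen; // 0 if the masks share no set bits
}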
O(1) with uniform distribution (or as uniform as the random generator offers) can be done, depending on whether you count certain mathematical operations as O(1). As a rule we would, though in the case of bit-twiddling one might make a case that they are not.
The trick is that while it's easy enough to get the lowest set bit and to get the highest set bit, in order to have uniform distribution we need to randomly pick a partitioning point, and then randomly pick whether we'll go for the highest bit below it or the lowest bit above (trying the other approach if that returns zero).
I've broken this down a bit more than might be usual to allow the steps to be more easily followed. The only question on constant timing I can see is whether Math.Pow and Math.Log should be considered O(1).
Hence:
public static uint FindRandomSharedBit(uint x, uint y)
{
    // AND the two numbers together to find the shared bits.
    return FindRandomBit(x & y);
}

public static uint FindRandomBit(uint val)
{
    // if there are no bits set, we can escape out quickly.
    if (val == 0)
        return 0;
    // note: per the other questions on this page, a long-lived Random
    // instance would be preferable if this is called in a tight loop.
    Random rnd = new Random();
    // pick a partition point. Note that rnd.Next(1, 32) is in the range 1 to 31.
    int maskPoint = rnd.Next(1, 32);
    // pick which side to try first.
    bool tryLowFirst = rnd.Next(0, 2) == 1;
    // will turn off all bits above our partition point.
    uint lowerMask = Convert.ToUInt32(Math.Pow(2, maskPoint) - 1);
    // will turn off all bits below our partition point.
    uint higherMask = ~lowerMask;
    if (tryLowFirst)
    {
        uint lowRes = FindLowestBit(val & higherMask);
        return lowRes != 0 ? lowRes : FindHighestBit(val & lowerMask);
    }
    uint hiRes = FindHighestBit(val & lowerMask);
    return hiRes != 0 ? hiRes : FindLowestBit(val & higherMask);
}

public static uint FindLowestBit(uint masked)
{                                  // e.g. 00100100
    uint minusOne = masked - 1;    // e.g. 00100011
    uint xord = masked ^ minusOne; // e.g. 00000111
    uint plusOne = xord + 1;       // e.g. 00001000
    return plusOne >> 1;           // e.g. 00000100
}

public static uint FindHighestBit(uint masked)
{
    return (uint)Math.Pow(2, Math.Floor(Math.Log(masked, 2)));
}
I believe that, if you want uniform, then the answer will have to be Theta(n) in terms of the number of bits, if it has to work for all possible combinations.
The following C++ snippet (stolen) should be able to check if any given num is a power of 2.
if (!var || (var & (var - 1))) {
    printf("%u is not power of 2\n", var);
}
else {
    printf("%u is power of 2\n", var);
}
If you have few enough bits to worry about, you can get O(1) using a lookup table:
var lookup8bits = new int[][] {
    new int[] {},
    new int[] {0},
    new int[] {1},
    new int[] {0, 1},
    ...
    new int[] {0, 1, 2, 3, 4, 5, 6, 7}
};
Failing that, you can find the least significant bit of a number x with (x & -x), assuming 2s complement. For example, if x = 46 = 101110b, then -x = 111...111010010b, hence x & -x = 10.
You can use this technique to enumerate the set bits of x in O(n) time, where n is the number of set bits in x.
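A C# sketch of that enumeration (uint arithmetic, so -x is written as ~x + 1):
static IEnumerable<uint> SetBits(uint x)
{
    while (x != 0)
    {
        yield return x & (~x + 1u); // isolate the least significant set bit
        x &= x - 1;                 // clear it
    }
}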
Note that computing a pseudo random number is going to take you a lot longer than enumerating the set bits in x!
This can't be done in O(1), and any solution for a fixed number of bits N (unless it's totally, ridiculously stupid) will have a constant upper bound for that N.
