This question already has answers here:
Count number of bits in a 64-bit (long, big) integer?
(3 answers)
Closed 9 years ago.
For an image comparison algorithm I get a 64-bit number (ulong) as a result. The number of 1s in it (101011011100...) tells me how similar two images are, so I need to count them. How would I best do this in C#?
I'd like to use this in a WinRT & Windows Phone App, so I'm also looking for a low-cost method.
EDIT: As I have to count the bits for a large number of images, I'm wondering whether the lookup-table approach might be best. But I'm not really sure how that works...
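For reference, a minimal sketch of the lookup-table idea (the names popCountTable, BuildTable and CountBits are only illustrative): precompute the bit count of every byte once, then sum the counts of the eight bytes of the ulong.

static readonly byte[] popCountTable = BuildTable();

static byte[] BuildTable()
{
    var table = new byte[256];
    for (int i = 1; i < 256; i++)
        table[i] = (byte)((i & 1) + table[i >> 1]); // bits in i = lowest bit + bits in i/2
    return table;
}

static int CountBits(ulong value)
{
    int count = 0;
    for (int i = 0; i < 8; i++)                       // a ulong has 8 bytes
    {
        count += popCountTable[(int)(value & 0xFF)];  // look up the low byte
        value >>= 8;                                  // move on to the next byte
    }
    return count;
}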
Sean Eron Anderson's Bit Twiddling Hacks has this trick, among others:
Counting bits set, in parallel
unsigned int v; // count bits set in this (32-bit value)
unsigned int c; // store the total here
static const int S[] = {1, 2, 4, 8, 16}; // Magic Binary Numbers
static const int B[] = {0x55555555, 0x33333333, 0x0F0F0F0F, 0x00FF00FF, 0x0000FFFF};
c = v - ((v >> 1) & B[0]);
c = ((c >> S[1]) & B[1]) + (c & B[1]);
c = ((c >> S[2]) + c) & B[2];
c = ((c >> S[3]) + c) & B[3];
c = ((c >> S[4]) + c) & B[4];
The B array, expressed as binary, is:
B[0] = 0x55555555 = 01010101 01010101 01010101 01010101
B[1] = 0x33333333 = 00110011 00110011 00110011 00110011
B[2] = 0x0F0F0F0F = 00001111 00001111 00001111 00001111
B[3] = 0x00FF00FF = 00000000 11111111 00000000 11111111
B[4] = 0x0000FFFF = 00000000 00000000 11111111 11111111
We can adjust the method for larger integer sizes by continuing with the patterns for the Binary Magic Numbers, B and S. If there are k bits, then we need the arrays S and B to be ceil(lg(k)) elements long, and we must compute the same number of expressions for c as S or B are long. For a 32-bit v, 16 operations are used.
The best method for counting bits in a 32-bit integer v is the following:
v = v - ((v >> 1) & 0x55555555); // reuse input as temporary
v = (v & 0x33333333) + ((v >> 2) & 0x33333333); // temp
c = ((v + (v >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; // count
The best bit counting method takes only 12 operations, which is the same as the lookup-table method, but avoids the memory and potential cache misses of a table. It is a hybrid between the purely parallel method above and the earlier methods using multiplies (in the section on counting bits with 64-bit instructions), though it doesn't use 64-bit instructions. The counts of bits set in the bytes is done in parallel, and the sum total of the bits set in the bytes is computed by multiplying by 0x1010101 and shifting right 24 bits.
A generalization of the best bit counting method to integers of bit-widths up to 128 (parameterized by type T) is this:
v = v - ((v >> 1) & (T)~(T)0/3); // temp
v = (v & (T)~(T)0/15*3) + ((v >> 2) & (T)~(T)0/15*3); // temp
v = (v + (v >> 4)) & (T)~(T)0/255*15; // temp
c = (T)(v * ((T)~(T)0/255)) >> (sizeof(T) - 1) * CHAR_BIT; // count
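Translated to C# for the asker's 64-bit case, the same pattern looks roughly like this (a sketch following the method above, not code taken from Anderson's page):

static int PopCount64(ulong v)
{
    v = v - ((v >> 1) & 0x5555555555555555UL);                           // count per 2-bit group
    v = (v & 0x3333333333333333UL) + ((v >> 2) & 0x3333333333333333UL);  // count per 4-bit group
    v = (v + (v >> 4)) & 0x0F0F0F0F0F0F0F0FUL;                           // count per byte
    return (int)((v * 0x0101010101010101UL) >> 56);                      // sum of the byte counts
}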
Something along these lines would do (note that this isn't tested code; I just wrote it here, so it may well require tweaking).
int numberOfOnes = 0;
for (int i = 63; i >= 0; i--)
{
    if (((yourUInt64 >> i) & 1) == 1) numberOfOnes++;
}
Option 1 - fewer iterations if the 64-bit result < 2^63:
byte numOfOnes = 0;
while (result != 0)
{
    numOfOnes += (byte)(result & 0x1);
    result = result >> 1;
}
return numOfOnes;
Option 2 - constant number of iterations - can use loop unrolling:
byte numOfOnes = 0;
for (int i = 0; i < 64; i++)
{
    numOfOnes += (byte)(result & 0x1);
    result = result >> 1;
}
This is a 32-bit version of BitCount; you could easily extend it to a 64-bit version by adding one more fold with a right shift by 32, and it would be very efficient.
int bitCount(int x) {
    /* First let res = ((x & 0xAAAAAAAA) >> 1) + (x & 0x55555555);
     * after that, the (2k)th and (2k+1)th bits of res
     * hold the number of 1s contained in the (2k)th
     * and (2k+1)th bits of x.
     * The same idea is then used to calculate the number of 1s
     * in each group of 4 bits, then 8, 16 and 32.
     */
    int varA = (85 << 8) | 85;   // 0x00005555
    varA = (varA << 16) | varA;  // 0x55555555
    int res = ((x >> 1) & varA) + (x & varA);
    varA = (51 << 8) | 51;       // 0x00003333
    varA = (varA << 16) | varA;  // 0x33333333
    res = ((res >> 2) & varA) + (res & varA);
    varA = (15 << 8) | 15;       // 0x00000F0F
    varA = (varA << 16) | varA;  // 0x0F0F0F0F
    res = ((res >> 4) & varA) + (res & varA);
    varA = (255 << 16) | 255;    // 0x00FF00FF
    res = ((res >> 8) & varA) + (res & varA);
    varA = (255 << 8) | 255;     // 0x0000FFFF
    res = ((res >> 16) & varA) + (res & varA);
    return res;
}
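A possible 64-bit version in C#, along the lines suggested above (a sketch with the masks written out as 64-bit literals and the extra fold by 32 added):

static int BitCount64(ulong x)
{
    x = (x & 0x5555555555555555UL) + ((x >> 1) & 0x5555555555555555UL);    // 2-bit groups
    x = (x & 0x3333333333333333UL) + ((x >> 2) & 0x3333333333333333UL);    // 4-bit groups
    x = (x & 0x0F0F0F0F0F0F0F0FUL) + ((x >> 4) & 0x0F0F0F0F0F0F0F0FUL);    // bytes
    x = (x & 0x00FF00FF00FF00FFUL) + ((x >> 8) & 0x00FF00FF00FF00FFUL);    // 16-bit groups
    x = (x & 0x0000FFFF0000FFFFUL) + ((x >> 16) & 0x0000FFFF0000FFFFUL);   // 32-bit groups
    x = (x & 0x00000000FFFFFFFFUL) + ((x >> 32) & 0x00000000FFFFFFFFUL);   // final fold by 32
    return (int)x;
}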
Related
I'm packing some binary data as a short, but want to hold four values of 0-F each, and would like to do this without having a bunch of switch() cases reading the string.Split of a hex string.
Does someone have a clever, elegant solution for this, or should I just long-hand it?
eg; 1C4A = (1, 12, 4, 10)
Shift in and out
var a = 1;
var b = 12;
var c = 4;
var d = 10;
// in
var packed = (short) ((a << 12) | (b << 8) | (c << 4) | d);
// out
a = (packed >> 12) & 0xf;
b = (packed >> 8) & 0xf;
c = (packed >> 4) & 0xf;
d = packed & 0xF;
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
Console.WriteLine(d);
Output
1
12
4
10
You can shift by 4 (or divide and multiply by 16) to move numbers into different place values. Then mask and shift your packed number to get your original numbers back.
Eg if you want to store 1 and 2 you could do:
int packed = (1 << 4) + 2;
int v1 = (packed & 0xF0) >> 4;
int v2 = packed & 0x0F;
Console.WriteLine($"{v1}, {v2}");
>>> 1, 2
This question already has answers here:
How can I store 4 8 bit coordinates into one integer (C#)?
(4 answers)
Closed 2 years ago.
Let's say that I have the following four ints:
int a = 4;
int b = 5;
int c = 6;
int d = 7;
I want to store these values in one int:
int whole;
How would I do this using bitwise / shift operations? I tried:
int whole = a + (b << 8) + (c << 16) + (d << 24);
But I'm not sure if this will create unique values for whole, because I also want to retrieve the ints back from whole. So if I have, for example, whole = 5919835 I want to get the value of c back.
You can compress a, b, c and d into a single int if a, b, c and d are all in the range [0..255], i.e. if they are in fact bytes:
int whole = unchecked(a | (b << 8) | (c << 16) | (d << 24));
Note the unchecked: when d > 127 you would otherwise get an integer overflow, since int is a signed integer. Technically + would do as well, but | (bitwise or) seems more readable.
Reverse:
a = whole & 0xFF;
b = (whole >> 8) & 0xFF;
c = (whole >> 16) & 0xFF;
d = (whole >> 24) & 0xFF;
I have a working implementation to calculate modulo 7 of a 32-bit unsigned int, but I'm having trouble with the 64-bit implementation. The 32-bit implementations were from this blog post (with a few bug fixes). I was able to get 64-bit versions working for modulo 3, 5, 15, and 6, but not 7. The math is a little over my head.
For reference, here is a gist with the full code.
Here's the working 32 bit:
static public uint Mersenne7(uint a)
{
a = (a >> 24) + (a & 0xFFFFFF); // sum base 2**24 digits
a = (a >> 12) + (a & 0xFFF); // sum base 2**12 digits
a = (a >> 6) + (a & 0x3F); // sum base 2**6 digits
a = (a >> 3) + (a & 0x7); // sum base 2**2 digits
a = (a >> 2) + (a & 0x7); // sum base 2**2 digits
if (a > 5) a = a - 6;
return a;
}
I made what seemed like the obvious extension, which worked for modulo 3, 5, and 15, but for mod 7 the results are all over the place with no obvious pattern (except that the results are all under 7):
static public ulong Mersenne7(ulong a)
{
a = (a >> 48) + (a & 0xFFFFFFFFFFFF); // sum base 2**48 digits
a = (a >> 24) + (a & 0xFFFFFF); // sum base 2**24 digits
a = (a >> 12) + (a & 0xFFF); // sum base 2**12 digits
a = (a >> 6) + (a & 0x3F); // sum base 2**6 digits
a = (a >> 3) + (a & 0x7); // sum base 2**2 digits
a = (a >> 2) + (a & 0x7); // sum base 2**2 digits
if (a > 5) a = a - 6;
return a;
}
The same technique for 64 bits obviously doesn't work for mod 7. I've been trying some variations but I don't get anything noticeably better and I'm not sure how to work through it systematically.
I have benchmarked and shown that calculating modulo using shift and add for Mersenne numbers is faster than the built-in modulus operator in my environment, and this is running in a tight loop in a hot path (index into a static-size circular buffer). These lower-value divisors are also more common than larger buffer sizes.
The maths behind this is actually pretty simple.
(Note for the maths part I'm using a^b to mean "a to the power b" not
"a xor b". These math parts are not supposed to be C# code)
The key trick is that you split a into two pieces so that
a = b * 2^3 + c
where b = a / 2^3 = a >> 3 and c = a mod 2^3 = a & 0x7
Then
a mod 7 = ((b mod 7) * (2^3 mod 7) + c ) mod 7
but 2^3 mod 7 = 1 so
a mod 7 = ( b mod 7 + c ) mod 7 = (b + c) mod 7
We apply this trick several times using
1 = 2^3 mod 7 = 2^6 mod 7 = 2^12 mod 7 = 2^24 mod 7 = 2^48 mod 7
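For a concrete example: with a = 100, b = 100 >> 3 = 12 and c = 100 & 0x7 = 4, so b + c = 16, and 16 mod 7 = 2, which indeed equals 100 mod 7.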
With this in mind it looks like your "working" Mersenne7 doesn't work.
I think this:
static public uint Mersenne7(uint a)
{
a = (a >> 24) + (a & 0xFFFFFF); // sum base 2**24 digits
a = (a >> 12) + (a & 0xFFF); // sum base 2**12 digits
a = (a >> 6) + (a & 0x3F); // sum base 2**6 digits
a = (a >> 3) + (a & 0x7); // sum base 2**2 digits
a = (a >> 2) + (a & 0x7); // sum base 2**2 digits
if (a > 5) a = a - 6;
return a;
}
should be
static public uint Mersenne7(uint a)
{
a = (a >> 24) + (a & 0xFFFFFF); // sum base 2**24 digits
a = (a >> 12) + (a & 0xFFF); // sum base 2**12 digits
a = (a >> 6) + (a & 0x3F); // sum base 2**6 digits
a = (a >> 3) + (a & 0x7); // sum base 2**3 digits
a = (a >> 3) + (a & 0x7); // sum base 2**3 digits
if (a >= 7) a = a - 7;
return a;
}
Note the change in the value in the final comparison, and that the final (a >> 2) + (a & 0x7) line has been replaced by a second (a >> 3) + (a & 0x7) reduction.
With these changes both the uint and ulong versions should produce correct results, I think. (Haven't tested though.)
I've duplicated the second contraction - I'm not sure if it is actually needed. (It is there to handle overflow, but that may not occur; you'll need to try some values to check.)
In the ulong case you will also need the a = (a >> 48) + (a & 0xFFFFFFFFFFFF) line, as you have already implemented.
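Putting those changes into the asker's ulong version gives something like this (still only a sketch and, as noted above, untested; it is worth checking against a plain a % 7 over a range of values):

static public ulong Mersenne7(ulong a)
{
    a = (a >> 48) + (a & 0xFFFFFFFFFFFF); // sum base 2**48 digits
    a = (a >> 24) + (a & 0xFFFFFF);       // sum base 2**24 digits
    a = (a >> 12) + (a & 0xFFF);          // sum base 2**12 digits
    a = (a >> 6) + (a & 0x3F);            // sum base 2**6 digits
    a = (a >> 3) + (a & 0x7);             // sum base 2**3 digits
    a = (a >> 3) + (a & 0x7);             // sum base 2**3 digits again (handles the carry)
    if (a >= 7) a = a - 7;
    return a;
}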
I've found a method which implements the Adler32 algorithm in C# and I would like to use it, but I do not understand part of the code:
Can someone explain to me:
1) why bit operators are used when sum1 and sum2 are initialized
2) why sum2 is shifted?
Adler32 on wiki https://en.wikipedia.org/wiki/Adler-32
& operator explanation:
(Binary AND Operator copies a bit to the result if it exists in both operands)
private bool MakeForBuffer(byte[] bytesBuff, uint adlerCheckSum)
{
if (Object.Equals(bytesBuff, null))
{
checksumValue = 0;
return false;
}
int nSize = bytesBuff.GetLength(0);
if (nSize == 0)
{
checksumValue = 0;
return false;
}
uint sum1 = adlerCheckSum & 0xFFFF; // 1) why bit operator is used?
uint sum2 = (adlerCheckSum >> 16) & 0xFFFF; // 2) why bit operator is used? , why is it shifted?
for (int i = 0; i < nSize; i++)
{
sum1 = (sum1 + bytesBuff[i]) % adlerBase;
sum2 = (sum1 + sum2) % adlerBase;
}
checksumValue = (sum2 << 16) + sum1;
return true;
}
1) why bit operator is used?
& 0xFFFF sets the two high bytes of the checksum to 0, so sum1 is simply the lower 16 bits of the checksum.
2) why bit operator is used? , why is it shifted?
adlerCheckSum >> 16 shifts the 16 high bits down to the low 16 bits, and & 0xFFFF does the same as in the first step - it sets the 16 high bits to 0.
Example
adlerChecksum = 0x12345678
adlerChecksum & 0xFFFF = 0x00005678
adlerChecksum >> 16 = 0x00001234
(in C# a right shift of a uint fills the vacated bits with zeros, so this already is 0x00001234)
(adlerChecksum >> 16) & 0xFFFF = 0x00001234 - now you can be sure it's 0x1234; this step is just a precaution that's probably unnecessary in C#.
adlerChecksum = 0x12345678
sum1 = 0x00005678
sum2 = 0x00001234
Those two operations combined simply split the UInt32 checksum into two UInt16.
From the adler32 Tag-Wiki:
Adler-32 is a fast checksum algorithm used in zlib to verify the results of decompression. It is composed of two sums modulo 65521. Start with s1 = 1 and s2 = 0, then for each byte x, s1 = s1 + x, s2 = s2 + s1. The two sums are combined into a 32-bit value with s1 in the low 16 bits and s2 in the high 16 bits.
I need to extract the lowest set bit of a byte. For example, if it is 0xc0, then the result is 0x40: since 0xc0 is equal to binary 11000000, the result should be 01000000.
public static byte Puzzle(byte x) {
byte result = 0;
byte[] masks = new byte[]{1,2,4,8,16,32,64,128};
foreach(var mask in masks)
{
if((x&mask)!=0)
{
return mask;
}
}
return 0;
}
This is my current solution. It turns out this question can be solved in 3-4 lines...
public static byte Puzzle(byte x) {
return (byte) (x & (~x ^ -x));
}
If x is bbbb1000, ~x is BBBB0111 (B is !b).
-x is really ~x + 1 (two's complement), so adding 1 to BBBB0111 gives BBBB1000.
~x ^ -x is then 00001111, and &-ing that with x gives the lowest 1 bit.
A better answer was supplied by harold in the comments
public static byte Puzzle(byte x) {
return (byte) (x & -x);
}
Maybe not the best solution, but you can prepare an array byte[256], store the result for each number, and then use it.
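A rough sketch of that idea (the table and method names are illustrative; the entries here are filled using the x & -x trick from above):

static readonly byte[] lowestBitTable = BuildLowestBitTable();

static byte[] BuildLowestBitTable()
{
    var table = new byte[256];
    for (int i = 0; i < 256; i++)
        table[i] = (byte)(i & -i); // lowest set bit of i (0 for 0)
    return table;
}

public static byte Puzzle(byte x)
{
    return lowestBitTable[x]; // a single array lookup at runtime
}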
Another solution in 1 line:
return (byte)((~((x << 1) | (x << 2) | (x << 3) | (x << 4) | (x << 5) | (x << 6) | (x << 7) | (x << 8)) + 1) >> 1);
Hardly a 1-line solution, but at least it doesn't use the hardcoded list of the bits.
I'd do it with some simple bit shifting: down-shift the value until the lowest bit is not 0, then shift 1 up by the number of shifts performed to reconstruct the least significant set bit. Since it uses a 'while', it needs a zero check up front, or it will go into an infinite loop on zero.
public static Byte Puzzle(Byte x)
{
if (x == 0)
return 0;
byte shifts = 0;
while ((x & 1) == 0)
{
shifts++;
x = (Byte)(x >> 1);
}
return (Byte)(1 << shifts);
}