Java Integer.highestOneBit in C#

I have been trying hard to find an exact replacement for Java's Integer.highestOneBit(int) in C#.
I even tried finding its source code, but to no avail.
The JavaDocs say that this function:
Returns an int value with at most a single one-bit, in the position of the highest-order ("leftmost") one-bit in the specified int value.
So how would I go about implementing this in C#? Any help or link/redirection is appreciated.

This site provides an implementation that should work in C# with a few modifications:
public static uint highestOneBit(uint i)
{
    i |= (i >> 1);
    i |= (i >> 2);
    i |= (i >> 4);
    i |= (i >> 8);
    i |= (i >> 16);
    return i - (i >> 1);
}
http://ideone.com/oEiNcM
It basically fills all bit places lower than the highest one with 1s and then removes all except the highest bit.
Example (using only 16 bits instead of 32):
start: i = 0010000000000000
i |= (i >> 1) 0010000000000000 | 0001000000000000 -> 0011000000000000
i |= (i >> 2) 0011000000000000 | 0000110000000000 -> 0011110000000000
i |= (i >> 4) 0011110000000000 | 0000001111000000 -> 0011111111000000
i |= (i >> 8) 0011111111000000 | 0000000000111111 -> 0011111111111111
i - (i >> 1) 0011111111111111 - 0001111111111111 -> 0010000000000000
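For example, a quick sanity check (assuming the method above lives in some static helper class):
Console.WriteLine(highestOneBit(170)); // 10101010 -> 128 (10000000)
Console.WriteLine(highestOneBit(0));   // 0, matching Java's Integer.highestOneBit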

You can write your own Extension method for the int type:
public static class Extensions
{
    public static int HighestOneBit(this int number)
    {
        return (int)Math.Pow(2, Convert.ToString(number, 2).Length - 1);
    }
}
so it can be used as
int number = 170;
int result = number.HighestOneBit(); //128
or directly
int result = 170.HighestOneBit(); //128
Here's how it works:
Convert.ToString(number, 2) writes the number in binary form (e.g. 10101010 for 170). We then take the position of the highest set bit (the string's length minus 1) and compute 2^(that position), which is 128 in this case. Finally, since Math.Pow returns a double, we cast it back down to int. (Note that this returns 1 rather than 0 for an input of 0, unlike Java's Integer.highestOneBit.)

Update: .NET Core 3.0 introduced System.Numerics.BitOperations.LeadingZeroCount() and BitOperations.Log2(), which map directly to the CPU's leading-zero-count instruction and are therefore extremely efficient:
public static uint highestOneBit(uint i)
{
    return i == 0 ? 0 : 1u << BitOperations.Log2(i); // or
    // return i == 0 ? 0 : 1u << (31 - BitOperations.LeadingZeroCount(i));
}
This is basically rounding down to the previous power of 2, and there are many ways to do that on the famous Bit Twiddling Hacks site. The implementation there is for rounding up to the next power of 2, so just shift the result right by 1 to get what you want:
public static uint highestOneBit(uint v)
{
    // caveat: inputs that are already a power of 2 come out halved here,
    // and inputs with the top bit set overflow; the first version above avoids both.
    v--;
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    v++;           // now v is the next power of 2
    return v >> 1; // the previous power of 2
}
Another way:
public static uint highestOneBit(uint v)
{
    // ideally hoist this table into a static readonly field
    uint[] MultiplyDeBruijnBitPosition =
    {
        0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
        8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
    };
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return 1u << (int)MultiplyDeBruijnBitPosition[(v * 0x07C4ACDDU) >> 27];
}

Related

Storing 4x 0-16 numbers in a short (or 2 numbers in a byte)

I'm packing some binary data into a short, but want to store 4 values of 0-F each, and would like to do this without a bunch of switch() cases reading the string split of a hex value.
Does someone have a clever, elegant solution for this, or should I just long-hand it?
e.g. 1C4A = (1, 12, 4, 10)
Shift in and out
var a = 1;
var b = 12;
var c = 4;
var d = 10;
// in
var packed = (short) ((a << 12) | (b << 8) | (c << 4) | d);
// out
a = (packed >> 12) & 0xf;
b = (packed >> 8) & 0xf;
c = (packed >> 4) & 0xf;
d = packed & 0xF;
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
Console.WriteLine(d);
Output
1
12
4
10
You can shift by 4 (or divide and multiply by 16) to move numbers into different place values. Then mask and shift your packed number to get your original numbers back.
Eg if you want to store 1 and 2 you could do:
int packed = (1 << 4) + 2;
int v1 = (packed & 0xF0) >> 4;
int v2 = packed & 0x0F;
Console.WriteLine($"{v1}, {v2}");
>>> 1, 2
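If you want the full four-value case from the question wrapped up, a pair of helpers along these lines would do it (Pack and Unpack are just illustrative names, not framework methods):
static ushort Pack(int a, int b, int c, int d)
{
    // each value must fit in 4 bits (0-15)
    return (ushort)(((a & 0xF) << 12) | ((b & 0xF) << 8) | ((c & 0xF) << 4) | (d & 0xF));
}

static (int a, int b, int c, int d) Unpack(ushort packed)
{
    return ((packed >> 12) & 0xF, (packed >> 8) & 0xF, (packed >> 4) & 0xF, packed & 0xF);
}

// var packed = Pack(1, 12, 4, 10);   // 0x1C4A
// var (a, b, c, d) = Unpack(packed); // (1, 12, 4, 10)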

Operator shift overflow ? and UInt64

I tried to convert an Objective-C project to C# .NET and it was working great a few months ago, but now I updated something and it gives me bad values. Maybe you will see what I'm doing wrong.
This is the original function from https://github.com/magiconair/map2sqlite/blob/master/map2sqlite.m
uint64_t RMTileKey(int tileZoom, int tileX, int tileY)
{
uint64_t zoom = (uint64_t) tileZoom & 0xFFLL; // 8bits, 256 levels
uint64_t x = (uint64_t) tileX & 0xFFFFFFFLL; // 28 bits
uint64_t y = (uint64_t) tileY & 0xFFFFFFFLL; // 28 bits
uint64_t key = (zoom << 56) | (x << 28) | (y << 0);
return key;
}
My buggy .net version :
public UInt64 RMTileKey(int tileZoom, int tileX, int tileY)
{
UInt64 zoom = (UInt64)tileZoom & 0xFFL; // 8bits, 256 levels
UInt64 x = (UInt64)tileX & 0xFFFFFFFL; // 28 bits
UInt64 y = (UInt64)tileY & 0xFFFFFFFL; // 28 bits
UInt64 key = (zoom << 56) | (x << 28) | (y << 0);
return key;
}
The parameters are : tileZoom = 1, tileX = 32, tileY = 1012
UInt64 key = (zoom << 56) | (x << 28) | (y << 0); gives me an incredibly big number.
Precisely :
if zoom = 1, it gives me zoom << 56 = 72057594037927936
if x = 32, it gives me (x << 28) = 8589934592
Maybe 0 was the result before?
I checked the doc http://msdn.microsoft.com/en-us/library/f96c63ed(v=vs.110).aspx and it says:
If the type is unsigned, they are set to 0. Otherwise, they are filled with copies of the sign bit. For left-shift operators without overflow, the statement
In my case it seems I get an overflow when using an unsigned type, or maybe my .NET conversion is bad?

How to get amount of 1s from 64 bit number [duplicate]

This question already has answers here:
Count number of bits in a 64-bit (long, big) integer?
(3 answers)
Closed 9 years ago.
For an image comparison algorithm I get a 64-bit number as a result. The number of 1s in the number (ulong) (101011011100...) tells me how similar two images are, so I need to count them. How would I best do this in C#?
I'd like to use this in a WinRT & Windows Phone App, so I'm also looking for a low-cost method.
EDIT: As I have to count the bits for a large number of images, I'm wondering if the lookup-table approach might be best. But I'm not really sure how that works...
Sean Eron Anderson's Bit Twiddling Hacks has this trick, among others:
Counting bits set, in parallel
unsigned int v; // count bits set in this (32-bit value)
unsigned int c; // store the total here
static const int S[] = {1, 2, 4, 8, 16}; // Magic Binary Numbers
static const int B[] = {0x55555555, 0x33333333, 0x0F0F0F0F, 0x00FF00FF, 0x0000FFFF};
c = v - ((v >> 1) & B[0]);
c = ((c >> S[1]) & B[1]) + (c & B[1]);
c = ((c >> S[2]) + c) & B[2];
c = ((c >> S[3]) + c) & B[3];
c = ((c >> S[4]) + c) & B[4];
The B array, expressed as binary, is:
B[0] = 0x55555555 = 01010101 01010101 01010101 01010101
B[1] = 0x33333333 = 00110011 00110011 00110011 00110011
B[2] = 0x0F0F0F0F = 00001111 00001111 00001111 00001111
B[3] = 0x00FF00FF = 00000000 11111111 00000000 11111111
B[4] = 0x0000FFFF = 00000000 00000000 11111111 11111111
We can adjust the method for larger integer sizes by continuing with the patterns for the Binary Magic Numbers, B and S. If there are k bits, then we need the arrays S and B to be ceil(lg(k)) elements long, and we must compute the same number of expressions for c as S or B are long. For a 32-bit v, 16 operations are used.
The best method for counting bits in a 32-bit integer v is the following:
v = v - ((v >> 1) & 0x55555555); // reuse input as temporary
v = (v & 0x33333333) + ((v >> 2) & 0x33333333); // temp
c = ((v + (v >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; // count
The best bit counting method takes only 12 operations, which is the same as the lookup-table method, but avoids the memory and potential cache misses of a table. It is a hybrid between the purely parallel method above and the earlier methods using multiplies (in the section on counting bits with 64-bit instructions), though it doesn't use 64-bit instructions. The counts of bits set in the bytes is done in parallel, and the sum total of the bits set in the bytes is computed by multiplying by 0x1010101 and shifting right 24 bits.
A generalization of the best bit counting method to integers of bit-widths up to 128 (parameterized by type T) is this:
v = v - ((v >> 1) & (T)~(T)0/3); // temp
v = (v & (T)~(T)0/15*3) + ((v >> 2) & (T)~(T)0/15*3); // temp
v = (v + (v >> 4)) & (T)~(T)0/255*15; // temp
c = (T)(v * ((T)~(T)0/255)) >> (sizeof(T) - 1) * CHAR_BIT; // count
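For a 64-bit value in C#, a translation of that same pattern might look like the following (my own sketch of the technique, not code taken from the linked page):
static int CountSetBits(ulong v)
{
    v = v - ((v >> 1) & 0x5555555555555555UL);                          // count per 2-bit pair
    v = (v & 0x3333333333333333UL) + ((v >> 2) & 0x3333333333333333UL); // count per nibble
    v = (v + (v >> 4)) & 0x0F0F0F0F0F0F0F0FUL;                          // count per byte
    return (int)((v * 0x0101010101010101UL) >> 56);                     // sum the byte counts
}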
Something along these lines would do (note that this isn't tested code, I just wrote it here, so it may and probably will require tweaking).
int numberOfOnes = 0;
for (int i = 63; i >= 0; i--)
{
    if (((yourUInt64 >> i) & 1) == 1) numberOfOnes++;
}
Option 1 - fewer iterations if the 64-bit result < 2^63:
int numOfOnes = 0;
while (result != 0)
{
    numOfOnes += (int)(result & 0x1);
    result = (result >> 1);
}
return numOfOnes;
Option 2 - constant number of iterations - can use loop unrolling:
int numOfOnes = 0;
for (int i = 0; i < 64; i++)
{
    numOfOnes += (int)(result & 0x1);
    result = (result >> 1);
}
This is a 32-bit version of BitCount; you could easily extend it to a 64-bit version by adding one more right shift by 32, and it would be very efficient.
int bitCount(int x) {
/* first let res = ((x & 0xAAAAAAAA) >> 1) + (x & 0x55555555)
 * after that the (2k)th and (2k+1)th bits of res
 * will hold the number of 1s contained in the (2k)th
 * and (2k+1)th bits of x
 * we can use a similar way to calculate the number of 1s
 * contained in the (4k)th through (4k+3)th bits of x,
 * and so on for 8, 16, 32
 */
int varA = (85 << 8) | 85;
varA = (varA << 16) | varA;
int res = ((x>>1) & varA) + (x & varA);
varA = (51 << 8) | 51;
varA = (varA << 16) | varA;
res = ((res>>2) & varA) + (res & varA);
varA = (15 << 8) | 15;
varA = (varA << 16) | varA;
res = ((res>>4) & varA) + (res & varA);
varA = (255 << 16) | 255;
res = ((res>>8) & varA) + (res & varA);
varA = (255 << 8) | 255;
res = ((res>>16) & varA) + (res & varA);
return res;
}
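Regarding the lookup-table approach mentioned in the question's edit: the usual trick is a 256-entry table holding the bit count of every possible byte value, then summing the table entries for the eight bytes of the 64-bit number. A sketch (the table and method names are just illustrative):
static readonly byte[] BitsPerByte = BuildTable();

static byte[] BuildTable()
{
    var table = new byte[256];
    for (int i = 1; i < 256; i++)
        table[i] = (byte)(table[i >> 1] + (i & 1)); // count of i = count of i/2 plus its lowest bit
    return table;
}

static int CountBits(ulong value)
{
    int count = 0;
    for (int shift = 0; shift < 64; shift += 8)
        count += BitsPerByte[(int)((value >> shift) & 0xFF)];
    return count;
}
On .NET Core 3.0 and later there is also System.Numerics.BitOperations.PopCount, which compiles down to the hardware popcount instruction where available.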

How to clear the most significant bit?

How do I change the most significant bit in an int from 1 to 0? For instance, I want to change 01101 to 0101.
Edit: Simplified (and explained) answer
The answer I gave below is overkill if your only goal is to set the most significant bit to zero.
That final bit of code constructs a bit mask that includes all the bits in the number.
mask |= mask >> 1;
mask |= mask >> 2;
mask |= mask >> 4;
mask |= mask >> 8;
mask |= mask >> 16;
Here is the series of calculations it performs on a given 32 bit unsigned integer:
mask = originalValue
mask: 01000000000000000000000000000000
mask |= mask >> 1: 01100000000000000000000000000000
mask |= mask >> 2: 01111000000000000000000000000000
mask |= mask >> 4: 01111111100000000000000000000000
mask |= mask >> 8: 01111111111111111000000000000000
mask |= mask >> 16: 01111111111111111111111111111111
Since it does a bit-shift right, which does not wrap around, it will never set any bit higher than the most significant set bit to one. Since it uses a logical OR, it will never set any bit to zero that wasn't already zero.
Logically, this will always create a bit mask that fills the whole uint, up to and including the most significant bit that was originally set, but no higher.
From that mask it is fairly easy to shrink it to include all but the most significant bit that was originally set:
mask = mask >> 1: 00111111111111111111111111111111
Then just do a bitwise AND against the original value, and it will clear the original most significant set bit while leaving the lower bits intact:
originalValue &= mask: 00000000000000000000000000000000
The original number I used here shows the mask-building process well, but it doesn't show that last calculation very well. Let's run through the calculations with some more interesting example values (the ones in the question):
originalValue: 1101
mask = originalValue
mask: 00000000000000000000000000001101
mask |= mask >> 1: 00000000000000000000000000001111
mask |= mask >> 2: 00000000000000000000000000001111
mask |= mask >> 4: 00000000000000000000000000001111
mask |= mask >> 8: 00000000000000000000000000001111
mask |= mask >> 16: 00000000000000000000000000001111
mask = mask >> 1: 00000000000000000000000000000111
And here is the value you're looking for:
originalValue &= mask: 00000000000000000000000000000101
Since we can see this works, let's put the final code together:
uint SetHighestBitToZero(uint originalValue)
{
uint mask = originalValue;
mask |= mask >> 1;
mask |= mask >> 2;
mask |= mask >> 4;
mask |= mask >> 8;
mask |= mask >> 16;
mask = mask >> 1;
return originalValue & mask;
}
// ...
Console.WriteLine(SetHighestBitToZero(13)); // 1101
5
(which is 0101)
Original answer I gave
For these kind of questions, I often reference this article:
"Bit Twiddling Hacks"
The particular section you want is called "Finding integer log base 2 of an integer (aka the position of the highest bit set)".
Here is the first of a series of solutions (each more optimal than the previous):
http://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious
The final solution in the article is (converted to C#):
uint originalValue = 13;
uint v = originalValue; // find the log base 2 of 32-bit v
int r; // result goes here
uint[] MultiplyDeBruijnBitPosition =
{
0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
};
v |= v >> 1; // first round down to one less than a power of 2
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
r = (int)MultiplyDeBruijnBitPosition[(uint)(v * 0x07C4ACDDU) >> 27];
Once you've found the highest set bit, simply mask it out:
originalValue &= ~(uint)(1 << r); // Force bit "r" to be zero
Inspired by Merlyn Morgan-Graham's answer:
static uint fxn(uint v)
{
uint i = v;
v |= v >> 1;
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
return (v >> 1) & i;
}
You could use something like the following (untested):
int a = 13; // 01101
uint value = 0x80000000; // 0x80000000 doesn't fit in an int, so probe with a uint
while (((uint)a & value) != value)
{
    value = value >> 1;
}
if (value > 0)
    a = a ^ (int)value;
int x = 0xD; //01101
int wk = x;
int mask = 1;
while(0!=(wk >>= 1)) mask <<= 1;
x ^= mask; //00101
If you know the size of the type, you can shift out the bits.
You could also just as easily put it in a BitArray and flip the MSB.
Example 1:
short value = -5334;
var ary = new BitArray(new[] { (value & 0xFF00) >> 8 });
ary.Set(7, false);
Example 2:
short value = -5334;
var newValue = (short)((ushort)(value << 1) >> 1); // shift the top bit out via the 16-bit unsigned view
// Or
newValue = (short)(value & 0x7FFF);
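For completeness, a full round trip through BitArray that clears the highest set bit of a 16-bit value might look like this (just a sketch; it uses System.Collections.BitArray and BitConverter for the byte conversion):
short value = -5334;
var bits = new BitArray(BitConverter.GetBytes(value)); // 16 bits, least significant first
for (int i = bits.Length - 1; i >= 0; i--)
{
    if (bits[i])
    {
        bits[i] = false; // clear the highest set bit
        break;
    }
}
var bytes = new byte[2];
bits.CopyTo(bytes, 0);
short cleared = BitConverter.ToInt16(bytes, 0); // 27434 (0x6B2A)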
The easiest way to do bitwise operations like this is to use the left shift operator (<<).
(1 << 3) = 1000b, (1 << 5) = 100000b, etc.
then you use a value in which you want to change a certain bit and use
| (OR) if you want to change it to a 1 or
& ~ (AND NOT) if you want to change it to a 0.
Like this:
int a = 13; //01101
int result = a | (1 << 5); //gives 101101
int result2 = result & ~(1 << 5); //gives 001101
If you treat that as an integer, the result will always be displayed without the leading zero. String manipulation might output something closer to what you want, though I'm not sure how that helps you. Using a string it would be something like this:
string y = "01101";
int pos = y.IndexOf("1");
y = y.Insert(pos, "0");
y = y.Remove(pos + 1, 1);

Faster way to swap endianness in C# with 16 bit words

There's got to be a faster and better way to swap the bytes of 16-bit words than this:
public static void Swap(byte[] data)
{
for (int i = 0; i < data.Length; i += 2)
{
byte b = data[i];
data[i] = data[i + 1];
data[i + 1] = b;
}
}
Does anyone have an idea?
In my attempt to apply for the Uberhacker award, I submit the following. For my testing, I used a Source array of 8,192 bytes and called SwapX2 100,000 times:
public static unsafe void SwapX2(Byte[] source)
{
fixed (Byte* pSource = &source[0])
{
Byte* bp = pSource;
Byte* bp_stop = bp + source.Length;
while (bp < bp_stop)
{
*(UInt16*)bp = (UInt16)(*bp << 8 | *(bp + 1));
bp += 2;
}
}
}
My benchmarking indicates that this version is over 1.8 times faster than the code submitted in the original question.
This way appears to be slightly faster than the method in the original question:
private static byte[] _temp = new byte[0];
public static void Swap(byte[] data)
{
if (data.Length > _temp.Length)
{
_temp = new byte[data.Length];
}
Buffer.BlockCopy(data, 1, _temp, 0, data.Length - 1);
for (int i = 0; i < data.Length; i += 2)
{
_temp[i + 1] = data[i];
}
Buffer.BlockCopy(_temp, 0, data, 0, data.Length);
}
My benchmarking assumed that the method is called repeatedly, so that the resizing of the _temp array isn't a factor. This method relies on the fact that half of the byte-swapping can be done with the initial Buffer.BlockCopy(...) call (with the source position offset by 1).
Please benchmark this yourselves, in case I've completely lost my mind. In my tests, this method takes approximately 70% as long as the original method (which I modified to declare the byte b outside of the loop).
I always liked this:
public static Int64 SwapByteOrder(Int64 value)
{
var uvalue = (UInt64)value;
UInt64 swapped =
( (0x00000000000000FF) & (uvalue >> 56)
| (0x000000000000FF00) & (uvalue >> 40)
| (0x0000000000FF0000) & (uvalue >> 24)
| (0x00000000FF000000) & (uvalue >> 8)
| (0x000000FF00000000) & (uvalue << 8)
| (0x0000FF0000000000) & (uvalue << 24)
| (0x00FF000000000000) & (uvalue << 40)
| (0xFF00000000000000) & (uvalue << 56));
return (Int64)swapped;
}
I believe you'll find this is the fastest method as well a being fairly readable and safe. Obviously this applies to 64-bit values but the same technique could be used for 32- or 16-.
The next method, in my tests, is almost 3 times as fast as the accepted answer. (It is always faster on more than 3 characters or six bytes, and a bit slower on three characters or six bytes or fewer.) (Note that the accepted answer can read/write outside the bounds of the array.)
(Update: while you have the pointer there's no need to call the Length property to get the length. Using the pointer is a bit faster, but requires either a runtime check or, as in the next example, a project configuration per platform. Define X86 and X64 under each configuration.)
static unsafe void SwapV2(byte[] source)
{
fixed (byte* psource = source)
{
#if X86
var length = *((uint*)(psource - 4)) & 0xFFFFFFFEU;
#elif X64
var length = *((uint*)(psource - 8)) & 0xFFFFFFFEU;
#else
var length = (source.Length & 0xFFFFFFFE);
#endif
while (length > 7)
{
length -= 8;
ulong* pulong = (ulong*)(psource + length);
*pulong = ( ((*pulong >> 8) & 0x00FF00FF00FF00FFUL)
| ((*pulong << 8) & 0xFF00FF00FF00FF00UL));
}
if(length > 3)
{
length -= 4;
uint* puint = (uint*)(psource + length);
*puint = ( ((*puint >> 8) & 0x00FF00FFU)
| ((*puint << 8) & 0xFF00FF00U));
}
if(length > 1)
{
ushort* pushort = (ushort*)psource;
*pushort = (ushort) ( (*pushort >> 8)
| (*pushort << 8));
}
}
}
Five tests with 300.000 times 8192 bytes
SwapV2: 1055, 1051, 1043, 1041, 1044
SwapX2: 2802, 2803, 2803, 2805, 2805
Five tests with 50.000.000 times 6 bytes
SwapV2: 1092, 1085, 1086, 1087, 1086
SwapX2: 1018, 1019, 1015, 1017, 1018
But if the data is large and performance really matters, you could use SSE or AVX. (13 times faster.) https://pastebin.com/WaFk275U
Test 5 times, 100000 loops with 8192 bytes or 4096 chars
SwapX2 : 226, 223, 225, 226, 227 Min: 223
SwapV2 : 113, 111, 112, 114, 112 Min: 111
SwapA2 : 17, 17, 17, 17, 16 Min: 16
Well, you could use the XOR swapping trick, to avoid an intermediate byte. It won't be any faster, though, and I wouldn't be surprised if the IL is exactly the same.
for (int i = 0; i < data.Length; i += 2)
{
data[i] ^= data[i + 1];
data[i + 1] ^= data[i];
data[i] ^= data[i + 1];
}
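On newer runtimes there's also a purely managed route worth benchmarking for yourself: reinterpret the buffer as 16-bit words and reverse each one with the Span APIs (MemoryMarshal.Cast and BinaryPrimitives.ReverseEndianness, available since around .NET Core 2.1). This is only a sketch and I haven't measured it against the versions above:
using System;
using System.Buffers.Binary;
using System.Runtime.InteropServices;

public static class WordSwapper
{
    public static void Swap(byte[] data)
    {
        // View the byte array as 16-bit words; a trailing odd byte is left untouched.
        Span<ushort> words = MemoryMarshal.Cast<byte, ushort>(data);
        for (int i = 0; i < words.Length; i++)
        {
            words[i] = BinaryPrimitives.ReverseEndianness(words[i]);
        }
    }
}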
