How to clear the most significant bit? - c#

How do I change the most significant bit in an int from 1 to 0? For instance, I want to change 01101 to 0101.

Edit: Simplified (and explained) answer
The answer I gave below is overkill if your only goal is to set the most significant bit to zero.
The core of that answer is this bit of code, which constructs a bit mask covering every bit up to and including the highest set bit:
mask |= mask >> 1;
mask |= mask >> 2;
mask |= mask >> 4;
mask |= mask >> 8;
mask |= mask >> 16;
Here is the series of calculations it performs on a given 32-bit unsigned integer:
mask = originalValue
mask: 01000000000000000000000000000000
mask |= mask >> 1: 01100000000000000000000000000000
mask |= mask >> 2: 01111000000000000000000000000000
mask |= mask >> 4: 01111111100000000000000000000000
mask |= mask >> 8: 01111111111111111000000000000000
mask |= mask >> 16: 01111111111111111111111111111111
Because it shifts right (and a right shift does not wrap around), it can never set a bit above the original most significant bit. And because it uses a bitwise OR, it never clears a bit that was already set.
Logically, this will always create a bit mask that fills the whole uint, up to and including the most significant bit that was originally set, but no higher.
From that mask it is fairly easy to shrink it to include all but the most significant bit that was originally set:
mask = mask >> 1: 00111111111111111111111111111111
Then just AND this mask against the original value; everything from the most significant set bit upward is cleared, and all the lower bits pass through unchanged:
originalValue &= mask: 00000000000000000000000000000000
The original number I used here shows the mask-building process well, but it doesn't show that last calculation very well. Let's run through the calculations with a more interesting example value (the one in the question):
originalValue: 1101
mask = originalValue
mask: 00000000000000000000000000001101
mask |= mask >> 1: 00000000000000000000000000001111
mask |= mask >> 2: 00000000000000000000000000001111
mask |= mask >> 4: 00000000000000000000000000001111
mask |= mask >> 8: 00000000000000000000000000001111
mask |= mask >> 16: 00000000000000000000000000001111
mask = mask >> 1: 00000000000000000000000000000111
And here is the value you're looking for:
originalValue &= mask: 00000000000000000000000000000101
Since we can see this works, let's put the final code together:
uint SetHighestBitToZero(uint originalValue)
{
    uint mask = originalValue;
    mask |= mask >> 1;
    mask |= mask >> 2;
    mask |= mask >> 4;
    mask |= mask >> 8;
    mask |= mask >> 16;
    mask = mask >> 1;
    return originalValue & mask;
}
// ...
Console.WriteLine(SetHighestBitToZero(13)); // 13 is 1101 in binary
// output: 5 (which is 0101)
Original answer I gave
For this kind of question, I often reference this article:
"Bit Twiddling Hacks"
The particular section you want is called "Finding integer log base 2 of an integer (aka the position of the highest bit set)".
Here is the first of a series of solutions (each more optimal than the previous):
http://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious
The final solution in the article is (converted to C#):
uint originalValue = 13;
uint v = originalValue; // find the log base 2 of 32-bit v
int r; // result goes here
uint[] MultiplyDeBruijnBitPosition =
{
    0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
    8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
};
v |= v >> 1; // first round down to one less than a power of 2
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
r = (int)MultiplyDeBruijnBitPosition[(uint)(v * 0x07C4ACDDU) >> 27];
Once you've found the highest set bit, simply mask it out:
originalValue &= ~(uint)(1 << r); // Force bit "r" to be zero
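Putting those pieces together into a single method (the name ClearHighestSetBit is mine; this is a sketch, not code from the article):
uint ClearHighestSetBit(uint originalValue)
{
    if (originalValue == 0) return 0; // nothing to clear

    uint[] MultiplyDeBruijnBitPosition =
    {
        0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
        8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
    };

    uint v = originalValue;
    v |= v >> 1; // fold the highest set bit into every lower position
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    int r = (int)MultiplyDeBruijnBitPosition[(v * 0x07C4ACDDU) >> 27];

    return originalValue & ~(1u << r); // force bit r to zero
}

Console.WriteLine(ClearHighestSetBit(13)); // 5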

Inspired by Merlyn Morgan-Graham's answer
static uint fxn(uint v)
{
    uint i = v; // keep the original value
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return (v >> 1) & i; // mask off the highest set bit
}

You could use something like the following (untested):
uint a = 13; // 01101
uint value = 0x80000000; // only the most significant bit set (too large for int, hence uint)
while ((a & value) != value)
{
    value = value >> 1;
}
if (value > 0)
    a = a ^ value; // toggle off the highest set bit

int x = 0xD; // 01101
int wk = x;
int mask = 1;
while (0 != (wk >>= 1)) mask <<= 1; // walk mask up to the highest set bit
x ^= mask; // 00101

If you know the size of the type, you can shift out the bits.
You could also just as easily put it in a BitArray and flip the MSB.
Example 1:
short value = -5334;
var ary = new BitArray(new[] { (value & 0xFF00) >> 8 }); // high byte of the short
ary.Set(7, false); // bit 7 of that byte is the short's most significant bit
Example 2:
short value = -5334;
var newValue = Convert.ToInt16((ushort)(value << 1) >> 1); // 27434 == 0x6B2A
// Or simply mask the sign bit away:
newValue = Convert.ToInt16(value & 0x7FFF);
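A fuller sketch that round-trips the whole 16-bit value through a BitArray (this assumes the little-endian byte order that BitConverter uses on common platforms):
using System;
using System.Collections;

short value = -5334;
var bits = new BitArray(BitConverter.GetBytes(value)); // 16 bits, bit 0 = LSB
bits.Set(15, false);                                   // clear the most significant bit
var bytes = new byte[2];
bits.CopyTo(bytes, 0);
short result = BitConverter.ToInt16(bytes, 0);         // 27434 == 0x6B2A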

The easiest way to do bitwise operations like this is to use the left shift operator (<<).
(1 << 3) = 1000b, (1 << 5) = 100000b, etc.
then you use a value in which you want to change a certain bit and use
| (OR) if you want to change it to a 1 or
& ~ (AND NOT) if you want to change it to a 0.
Like this:
int a = 13; //01101
int result = a | (1 << 4); //gives 11101
int result2 = result & ~(1 << 4); //gives 01101
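Wrapped as reusable helpers (the names SetBit and ClearBit are mine):
static int SetBit(int value, int bit) => value | (1 << bit);    // force the bit to 1
static int ClearBit(int value, int bit) => value & ~(1 << bit); // force the bit to 0

int result = SetBit(13, 4);    // 29 == 11101b
int result2 = ClearBit(29, 4); // 13 == 01101b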

If you treat that as an integer, the result will always be displayed without the leading zero. String manipulation may output something closer to what you want, though I'm not sure how that helps you. Using a string, that would be something like this:
string y = "01101";
int pos = y.IndexOf("1");
y = y.Insert(pos, "0");
y = y.Remove(pos + 1, 1);

Related

Why is RandomNumberGenerator.Fill used instead of RandomNumberGenerator.GetInt32?

I was browsing the .NET Core source code and I came across this piece of code.
If the NETCOREAPP flag is defined, the code is basically this:
private static char GetRandomRecoveryCodeChar()
{
    // Based on RandomNumberGenerator implementation of GetInt32
    uint range = (uint)AllowedChars.Length - 1;

    // Create a mask for the bits that we care about for the range. The other bits will be
    // masked away.
    uint mask = range;
    mask |= mask >> 1;
    mask |= mask >> 2;
    mask |= mask >> 4;
    mask |= mask >> 8;
    mask |= mask >> 16;

    Span<uint> resultBuffer = stackalloc uint[1];
    uint result;

    do
    {
        RandomNumberGenerator.Fill(MemoryMarshal.AsBytes(resultBuffer));
        result = mask & resultBuffer[0];
    }
    while (result > range);

    return AllowedChars[(int)result];
}
Why isn't the method implemented with something which is apparently simpler, like this?
private static char GetRandomRecoveryCodeChar()
{
    int rInt = RandomNumberGenerator.GetInt32(0, AllowedChars.Length);
    return AllowedChars[rInt];
}
Where's the catch?
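As an aside, the mask construction in the quoted code is the same bit-fold used elsewhere on this page. Here is a standalone sketch of what it produces, assuming a hypothetical alphabet length of 20:
uint range = 20 - 1; // hypothetical AllowedChars.Length of 20, so range == 19 == 0b10011
uint mask = range;
mask |= mask >> 1;
mask |= mask >> 2;
mask |= mask >> 4;
mask |= mask >> 8;
mask |= mask >> 16;
// mask == 0b11111: a masked random draw lands in [0, 31], and anything
// above 19 is rejected and redrawn, so no value is over-represented.
Console.WriteLine(Convert.ToString((int)mask, 2)); // 11111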

How to perform bit copy in C# or byte array left shift operation

I want to store information in concatenated fashion in a byte array (2 bits + 6 bytes + 14 bits), but I have no idea how to do it. With Buffer.BlockCopy, I can work with bytes, not bits. Moreover, the 6-byte data is already part of a data structure and referenced everywhere, so I don't wish to change that variable. I was thinking of declaring a byte, converting to a binary string, and cutting and concatenating, but I think that would be a very bad implementation. There must be a better and more efficient way of doing this.
Here is the C# code that I am currently using, but I think there must be a better and more efficient way to do this.
byte[] ARC = new byte[7]{0x00,0xB1,0x1C,0x3D,0x4C,0x1A,0xEF};
RotateLeft(ARC);
RotateLeft(ARC);
RotateLeft(ARC);
RotateLeft(ARC);
RotateLeft(ARC);
RotateLeft(ARC);
//temp = Convert.ToString(ARC[1], 2).PadLeft(8,'0');
//MessageBox.Show(temp.ToString());
//MessageBox.Show(ByteArrayToString(ARC));
ARC[0] |= 1 << 6;
ARC[0] |= 1 << 7;
byte SB = 0x02;
string Hex = Convert.ToString(SB, 2);
Hex = Hex.Substring(0, 2);
Hex += "000000";
byte SB_processed = Convert.ToByte(Hex,2);
SB_processed |= 1 << 0;
SB_processed |= 1 << 1;
SB_processed |= 1 << 2;
SB_processed |= 1 << 3;
SB_processed |= 1 << 4;
SB_processed |= 1 << 5;
ARC[0] &= SB_processed;
MessageBox.Show(ByteArrayToString(ARC));
Logic for left shifting taken from this reference:
c# - left shift an entire byte array
I can't quite understand your question, but here is how to concatenate bits.
byte bits2 = 0b_000000_10;
byte bits6 = 0b_00_101100;
ushort bits14 = 0b_00_10110010001110;
uint concatenated = ((uint)bits2 << 20) + ((uint)bits6 << 14) + (uint)bits14;
uint test = 0b_00_10_101100_10110010001110;
if (concatenated == test)
{
    Console.WriteLine("Ok");
}
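The question's actual layout (2 bits + 6 bytes + 14 bits = 64 bits) can be packed the same way into a ulong. A sketch, reusing the six payload bytes from the question's ARC array (the other field values are made up):
byte bits2 = 0b10;                                        // 2-bit field
byte[] sixBytes = { 0xB1, 0x1C, 0x3D, 0x4C, 0x1A, 0xEF }; // 6-byte field
ushort bits14 = 0b10110010001110;                         // 14-bit field

ulong middle = 0;
foreach (byte b in sixBytes)
    middle = (middle << 8) | b; // the first byte becomes the most significant

ulong packed = ((ulong)bits2 << 62) | (middle << 14) | bits14;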

C# getting upper 4 bits of uint32 starting with first significant bit

I have a (in fact pseudo) random uint32 number, and I need its first 4 bits starting with the first bit that is not 0, e.g.
...000100101 => 1001
1000...0001 => 1000
...0001 => 0001
...0000 => 0000
etc
I understand I have to use something like this:
uint num = 1157; // some random number
uint high = num >> offset;
The problem is I don't know where the first set bit is, so I can't use >> with a constant shift amount. Can someone explain how to find this offset?
You can first calculate the highest significant bit (HSB) and then shift accordingly. You can do this like:
int hsb = -4;
for (uint cnum = num; cnum != 0; cnum >>= 1, hsb++) ;
if (hsb < 0) {
    hsb = 0;
}
uint result = num >> hsb;
So we first aim to detect the index of the highest set bit (or that index minus four). We do this by incrementing hsb and shifting cnum (a copy of num) to the right, until there are no set bits anymore in cnum.
Next we make sure the shift amount is not negative: if the number has fewer than four significant bits, we shift by zero and nothing is done. The result is the original num shifted to the right by that hsb.
If I run this on 0x123, I get 0x9 in the csharp interactive shell:
csharp> uint num = 0x123;
csharp> int hsb = -4;
csharp> for(uint cnum = num; cnum != 0; cnum >>= 1, hsb++);
csharp> if(hsb < 0) {
> hsb = 0;
> }
csharp> uint result = num >> hsb;
csharp> result
9
0x123 is 0001 0010 0011 in binary. So:
0001 0010 0011
1 001
And 1001 is 9.
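Wrapped into a self-contained method (the name Upper4Bits is mine), the same loop looks like this:
static uint Upper4Bits(uint num)
{
    int hsb = -4; // ends up at (number of significant bits) - 4
    for (uint cnum = num; cnum != 0; cnum >>= 1, hsb++) ;
    return hsb < 0 ? num : num >> hsb;
}

Console.WriteLine(Upper4Bits(0x123)); // 9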
Determining the position of the most significant non-zero bit is the same as computing the logarithm with base 2. There are "bit shift tricks" to do that quickly on a modern CPU:
int GetLog2Plus1(uint value)
{
    value |= value >> 1;
    value |= value >> 2;
    value |= value >> 4;
    value |= value >> 8;
    value |= value >> 16;
    // The rest is a population count of the now-filled mask.
    value -= (value >> 1) & 0x55555555;
    value = ((value >> 2) & 0x33333333) + (value & 0x33333333);
    value = ((value >> 4) + value) & 0x0F0F0F0F;
    value += value >> 8;
    value += value >> 16;
    value &= 0x0000003F;
    return (int)value;
}
This will return a number from 0 to 32:
Value | Log2 + 1
-------------------------------------------+---------
0b0000_0000_0000_0000_0000_0000_0000_0000U | 0
0b0000_0000_0000_0000_0000_0000_0000_0001U | 1
0b0000_0000_0000_0000_0000_0000_0000_0010U | 2
0b0000_0000_0000_0000_0000_0000_0000_0011U | 2
0b0000_0000_0000_0000_0000_0000_0000_0100U | 3
...
0b0111_1111_1111_1111_1111_1111_1111_1111U | 31
0b1000_0000_0000_0000_0000_0000_0000_0000U | 32
0b1000_0000_0000_0000_0000_0000_0000_0001U | 32
... |
0b1111_1111_1111_1111_1111_1111_1111_1111U | 32
(Math nitpickers will notice that the logarithm of 0 is undefined. However, I hope the table above demonstrates how that is handled and makes sense for this problem.)
You can then compute the most significant non-zero bits taking into account that you want the 4 least significant bits if the value is less than 8 (where log2 + 1 is < 4):
var log2Plus1 = GetLog2Plus1(value);
var bitsToShift = Math.Max(log2Plus1 - 4, 0);
var upper4Bits = value >> bitsToShift;
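A quick check against the example used above (0x123, whose top four bits are 1001):
uint value = 0x123;                           // 0b1_0010_0011
var log2Plus1 = GetLog2Plus1(value);          // 9
var bitsToShift = Math.Max(log2Plus1 - 4, 0); // 5
var upper4Bits = value >> bitsToShift;        // 9 == 0b1001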

Java Integer.highestOneBit in C#

I have been trying hard to find an exact replacement for Java's Integer.highestOneBit(int) in C#.
I even tried finding its source code, but to no avail.
JavaDocs tells that this function:
Returns an int value with at most a single one-bit, in the position of the highest-order ("leftmost") one-bit in the specified int value.
So how would I go about implementing this in C#? Any help or link/redirection is appreciated.
This site provides an implementation that should work in C# with a few modifications:
public static uint highestOneBit(uint i)
{
    i |= (i >> 1);
    i |= (i >> 2);
    i |= (i >> 4);
    i |= (i >> 8);
    i |= (i >> 16);
    return i - (i >> 1);
}
http://ideone.com/oEiNcM
It basically fills all bit places lower than the highest one with 1s and then removes all except the highest bit.
Example (using only 16 bits instead of 32):
start: i = 0010000000000000
i |= (i >> 1) 0010000000000000 | 0001000000000000 -> 0011000000000000
i |= (i >> 2) 0011000000000000 | 0000110000000000 -> 0011110000000000
i |= (i >> 4) 0011110000000000 | 0000001111000000 -> 0011111111000000
i |= (i >> 8) 0011111111000000 | 0000000000111111 -> 0011111111111111
i - (i >> 1) 0011111111111111 - 0001111111111111 -> 0010000000000000
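A quick usage check (170 is 10101010 in binary, so its highest one bit alone is 10000000b):
Console.WriteLine(highestOneBit(170)); // 128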
You can write your own extension method for the int type:
public static class Extensions
{
    public static int HighestOneBit(this int number)
    {
        return (int)Math.Pow(2, Convert.ToString(number, 2).Length - 1);
    }
}
so it can be used as
int number = 170;
int result = number.HighestOneBit(); //128
or directly
int result = 170.HighestOneBit(); //128
Here's how it works:
The Convert.ToString(number, 2) call writes our number in binary form (e.g. 10101010 for 170). We then use the highest bit's position (the length - 1) to calculate the value of 2^(highest bit position), which is 128 in this case. Finally, since Math.Pow returns a double, we cast it down to int.
Update: .NET Core 3.0 introduced BitOperations.LeadingZeroCount() and BitOperations.Log2() (in System.Numerics), which map directly to the underlying CPU's leading-zero-count instruction and are therefore extremely efficient:
public static uint highestOneBit(uint i)
{
    return i == 0 ? 0 : 1u << BitOperations.Log2(i); // or
    // return i == 0 ? 0 : 1u << (31 - BitOperations.LeadingZeroCount(i));
}
This is basically rounding down to the previous power of 2, and there are many ways to do that on the famous bithacks site. The implementation there rounds up to the next power of 2, so shift its result right by 1 to get what you want (the round-up version's initial v-- is dropped here so that exact powers of 2 map to themselves):
public static uint highestOneBit(uint v)
{
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    v++;           // now v is the next power of 2 (overflows to 0 if bit 31 was set)
    return v >> 1; // the previous power of 2, i.e. the highest one bit
}
Another way:
public static uint highestOneBit(uint v)
{
    int[] MultiplyDeBruijnBitPosition =
    {
        0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
        8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
    };
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return 1u << MultiplyDeBruijnBitPosition[(v * 0x07C4ACDDU) >> 27];
}

Operator shift overflow ? and UInt64

I tried to convert an Objective-C project to C# .NET, and it was working great a few months ago, but now I updated something and it gives me bad values. Maybe you will see what I'm doing wrong.
This is the original function from https://github.com/magiconair/map2sqlite/blob/master/map2sqlite.m
uint64_t RMTileKey(int tileZoom, int tileX, int tileY)
{
    uint64_t zoom = (uint64_t) tileZoom & 0xFFLL; // 8 bits, 256 levels
    uint64_t x = (uint64_t) tileX & 0xFFFFFFFLL;  // 28 bits
    uint64_t y = (uint64_t) tileY & 0xFFFFFFFLL;  // 28 bits
    uint64_t key = (zoom << 56) | (x << 28) | (y << 0);
    return key;
}
My buggy .NET version:
public UInt64 RMTileKey(int tileZoom, int tileX, int tileY)
{
    UInt64 zoom = (UInt64)tileZoom & 0xFFL;  // 8 bits, 256 levels
    UInt64 x = (UInt64)tileX & 0xFFFFFFFL;   // 28 bits
    UInt64 y = (UInt64)tileY & 0xFFFFFFFL;   // 28 bits
    UInt64 key = (zoom << 56) | (x << 28) | (y << 0);
    return key;
}
The parameters are: tileZoom = 1, tileX = 32, tileY = 1012
UInt64 key = (zoom << 56) | (x << 28) | (y << 0); gives me an incredibly big number.
Precisely:
if zoom = 1, it gives me zoom << 56 = 72057594037927936
if x = 32, it gives me (x << 28) = 8589934592
Maybe 0 was the result before?
I checked the doc http://msdn.microsoft.com/en-us/library/f96c63ed(v=vs.110).aspx, which says:
If the type is unsigned, they are set to 0. Otherwise, they are filled with copies of the sign bit.
In my case it seems I get an overflow when using an unsigned type, or maybe my .NET conversion is bad?
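For reference, those two intermediate values are exactly what the original C function produces as well: a 64-bit tile key is simply a very large integer, not an overflow. A quick sketch to verify the port with the question's parameters:
using System;

class TileKeyCheck
{
    static ulong RMTileKey(int tileZoom, int tileX, int tileY)
    {
        ulong zoom = (ulong)tileZoom & 0xFF; // 8 bits
        ulong x = (ulong)tileX & 0xFFFFFFF;  // 28 bits
        ulong y = (ulong)tileY & 0xFFFFFFF;  // 28 bits
        return (zoom << 56) | (x << 28) | y;
    }

    static void Main()
    {
        // zoom part: 72057594037927936, x part: 8589934592, y part: 1012
        Console.WriteLine(RMTileKey(1, 32, 1012)); // 72057602627863540
    }
}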
