Operator shift overflow? and UInt64 - C#

I tried to convert an Objective-C project to C# .NET and it was working great a few months ago, but now I updated something and it gives me bad values. Maybe you will see what I'm doing wrong.
This is the original function from https://github.com/magiconair/map2sqlite/blob/master/map2sqlite.m
uint64_t RMTileKey(int tileZoom, int tileX, int tileY)
{
    uint64_t zoom = (uint64_t) tileZoom & 0xFFLL; // 8 bits, 256 levels
    uint64_t x = (uint64_t) tileX & 0xFFFFFFFLL; // 28 bits
    uint64_t y = (uint64_t) tileY & 0xFFFFFFFLL; // 28 bits
    uint64_t key = (zoom << 56) | (x << 28) | (y << 0);
    return key;
}
My buggy .NET version:
public UInt64 RMTileKey(int tileZoom, int tileX, int tileY)
{
    UInt64 zoom = (UInt64)tileZoom & 0xFFL; // 8 bits, 256 levels
    UInt64 x = (UInt64)tileX & 0xFFFFFFFL; // 28 bits
    UInt64 y = (UInt64)tileY & 0xFFFFFFFL; // 28 bits
    UInt64 key = (zoom << 56) | (x << 28) | (y << 0);
    return key;
}
The parameters are: tileZoom = 1, tileX = 32, tileY = 1012.
UInt64 key = (zoom << 56) | (x << 28) | (y << 0); gives me an incredibly big number.
Precisely:
if zoom = 1, it gives me zoom << 56 = 72057594037927936
if x = 32, it gives me (x << 28) = 8589934592
Maybe 0 was the result before?
I checked the docs at http://msdn.microsoft.com/en-us/library/f96c63ed(v=vs.110).aspx, which say:
If the type is unsigned, they are set to 0. Otherwise, they are filled with copies of the sign bit. For left-shift operators without overflow, [...]
In my case it seems I get an overflow when using an unsigned type, or maybe my .NET conversion is bad?
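For reference, here is what those shifts evaluate to when run in isolation (a minimal standalone check; the constants are just the parameter values from above):
ulong zoom = 1UL & 0xFF;        // 1
ulong x = 32UL & 0xFFFFFFF;     // 32
ulong y = 1012UL & 0xFFFFFFF;   // 1012
Console.WriteLine(zoom << 56);  // 72057594037927936, i.e. 2^56, well within UInt64.MaxValue
Console.WriteLine(x << 28);     // 8589934592, i.e. 32 * 2^28
Console.WriteLine((zoom << 56) | (x << 28) | y);  // 72057602627863540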

Related

C# getting upper 4 bits of uint32 starting with first significant bit

My need is: I have some (in fact pseudo-random) uint32 number, and I need its first 4 bits starting with the first bit that is not 0, e.g.
...000100101 => 1001
1000...0001 => 1000
...0001 => 0001
...0000 => 0000
etc
I understand I have to use something like this:
uint num = 1157; // some random number
uint high = num >> offset;
The problem is, I don't know where the first set bit is, so I can't use >> with a constant shift amount. Can someone explain how to find this offset?
You can first calculate the position of the highest set bit (HSB) and then shift accordingly. You can do this like:
int hsb = -4;
for(uint cnum = num; cnum != 0; cnum >>= 1, hsb++);
if(hsb < 0) {
    hsb = 0;
}
uint result = num >> hsb;
So we first compute how far we need to shift: hsb ends up as the bit length of num minus four. We do this by incrementing hsb while shifting cnum (a copy of num) to the right, until there are no set bits left in cnum.
Next we clamp hsb at zero, so that numbers with fewer than four significant bits are left as they are. The result is the original num shifted to the right by hsb.
If I run this on 0x123, I get 0x9 in the csharp interactive shell:
csharp> uint num = 0x123;
csharp> int hsb = -4;
csharp> for(uint cnum = num; cnum != 0; cnum >>= 1, hsb++);
csharp> if(hsb < 0) {
> hsb = 0;
> }
csharp> uint result = num >> hsb;
csharp> result
9
0x123 is 0001 0010 0011 in binary. So:
0001 0010 0011
1 001
And 1001 is 9.
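If you want this as a reusable method, the loop can be wrapped up like this (a sketch; the method name is just illustrative):
static uint Upper4Bits(uint num)
{
    int hsb = -4; // ends up as (bit length of num) - 4
    for (uint cnum = num; cnum != 0; cnum >>= 1, hsb++) ;
    if (hsb < 0)
        hsb = 0;  // fewer than four significant bits: return num unchanged
    return num >> hsb;
}
// Upper4Bits(0x123) == 9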
Determining the position of the most significant non-zero bit is the same as computing the logarithm with base 2. There are "bit shift tricks" to do that quickly on a modern CPU:
int GetLog2Plus1(uint value)
{
    // Smear the highest set bit into all lower positions, so every bit
    // below (and including) the most significant set bit becomes 1.
    value |= value >> 1;
    value |= value >> 2;
    value |= value >> 4;
    value |= value >> 8;
    value |= value >> 16;
    // Count the set bits (a parallel popcount); the count equals log2 + 1.
    value -= value >> 1 & 0x55555555;
    value = (value >> 2 & 0x33333333) + (value & 0x33333333);
    value = (value >> 4) + value & 0x0F0F0F0F;
    value += value >> 8;
    value += value >> 16;
    value &= 0x0000003F;
    return (int) value;
}
This will return a number from 0 to 32:
Value | Log2 + 1
-------------------------------------------+---------
0b0000_0000_0000_0000_0000_0000_0000_0000U | 0
0b0000_0000_0000_0000_0000_0000_0000_0001U | 1
0b0000_0000_0000_0000_0000_0000_0000_0010U | 2
0b0000_0000_0000_0000_0000_0000_0000_0011U | 2
0b0000_0000_0000_0000_0000_0000_0000_0100U | 3
...
0b0111_1111_1111_1111_1111_1111_1111_1111U | 31
0b1000_0000_0000_0000_0000_0000_0000_0000U | 32
0b1000_0000_0000_0000_0000_0000_0000_0001U | 32
... |
0b1111_1111_1111_1111_1111_1111_1111_1111U | 32
(Math nitpickers will notice that the logarithm of 0 is undefined. However, I hope the table above demonstrates how that is handled and makes sense for this problem.)
You can then compute the most significant non-zero bits taking into account that you want the 4 least significant bits if the value is less than 8 (where log2 + 1 is < 4):
var log2Plus1 = GetLog2Plus1(value);
var bitsToShift = Math.Max(log2Plus1 - 4, 0);
var upper4Bits = value >> bitsToShift;
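For example (a quick check), applying this to the 0x123 value from the other answer also yields 9:
uint value = 0x123;                            // 1 0010 0011 in binary
var log2Plus1 = GetLog2Plus1(value);           // 9
var bitsToShift = Math.Max(log2Plus1 - 4, 0);  // 5
var upper4Bits = value >> bitsToShift;         // 9, i.e. 1001 in binary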

Read and extract specific number of bits of a 32-bit unsigned integer

How can I read and extract a specific number of bits from a 32-bit unsigned integer in C++/C, and then convert the resulting values to floating point?
For example:
For the 32-bit integer 0xC0DFCF19: x = read 11 bits, y = read 11 bits, and z = read the last 10 bits of the 32-bit integer.
Thanks in advance!
Okay! Thanks a lot for all answers, very helpful!
Could someone give example code showing how to read the integer 0x7835937C in a similar way, but where "y" should be 334 (x and z remain the same)? Thanks.
Here is a demonstrative program
#include <iostream>
#include <iomanip>
#include <cstdint>

int main()
{
    std::uint32_t value = 0xC0DFCF19;

    std::uint32_t x = value & 0x7FFu;
    std::uint32_t y = ( value & ( 0x7FFu << 11 ) ) >> 11;
    std::uint32_t z = ( value & ( 0x3FFu << 22 ) ) >> 22; // unsigned literal so the shifted mask does not overflow a signed int

    std::cout << "value = " << std::hex << value << std::endl;
    std::cout << "x = " << std::hex << x << std::endl;
    std::cout << "y = " << std::hex << y << std::endl;
    std::cout << "z = " << std::hex << z << std::endl;

    return 0;
}
The program output is
value = c0dfcf19
x = 719
y = 3f9
z = 303
You may declare the variables x, y, z as floats if you need.
The general idea is to mask part of the bits using bitwise boolean operations and the shift operator >> to move the bits to the desired location.
Specifically, the & operator allows you to keep specific bits as they are and set all others to zero.
As in your example, for y you need to mask out bits 10-20 of the 32-bit input, like in:
input xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
& with 00000000000111111111110000000000
= 00000000000xxxxxxxxxxx0000000000
shift by 10 bits to the right
= 000000000000000000000xxxxxxxxxxx
Since you cannot write the mask as a binary literal in C, you have to convert the bitmask to hex.
The conversion to floating point is handled implicitly when you assign the value to a double or float variable, assuming you want to convert the integer value (e.g. 5) to floating point (e.g. 5.0).
Assuming that your bits are partitioned as
xxxxxxxxxxxyyyyyyyyyyyzzzzzzzzzz
and you want double results your example would then be as following:
unsigned int v = 0; // input (unsigned, so that >> does not sign-extend)
double x = v >> 21;
double y = (v & 0x1FFC00) >> 10;
double z = v & 0x3FF;

why does this function translate that way with bitwise operators?

I was writing a control class for a device until I got to the point where I needed to convert an ARGB color into its format. At first, I wrote this function (which worked):
private static int convertFormat(System.Drawing.Color c)
{
    String all;
    int a = (int)((float)c.A / 31.875);
    if (a == 0)
        a = 1;
    all = a.ToString() + c.B.ToString("X").PadLeft(2, '0') + c.G.ToString("X").PadLeft(2, '0') + c.R.ToString("X").PadLeft(2, '0');
    int num = int.Parse(all, System.Globalization.NumberStyles.AllowHexSpecifier);
    return num;
}
But it was so ugly that I wanted to write a more elegant solution.
So I wrote some for loops to find the correct values, trying all combinations between 0 and 50.
It worked, and I ended up with this:
private static int convertFormatShifting(System.Drawing.Color c)
{
    int alpha = (int)Math.Round((float)c.A / 31.875);
    int a = Math.Max(alpha,1);
    return (a << 24) | (c.B << 48) | (c.G << 40) | (c.R << 32);
}
which works!
But now I would love for someone to explain to me why these are the correct shift values.
The least confusing shift values should be as follows:
return (a << 24) | (c.B << 16) | (c.G << 8) | (c.R << 0);
// Of course there's no need to shift by zero ^^^^^^^^
The reason your values work is that the shift operator masks the shift count to the bit width of the left operand: for a 32-bit left operand only the low 5 bits of the count are used, i.e. the count is effectively taken mod 32. In other words, all of the following shifts are equivalent to each other:
c.G << 8
c.G << 40
c.G << 72
c.G << 104
...
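For example, this little check shows the equivalence (the 0xAB value is arbitrary):
int g = 0xAB;
Console.WriteLine(g << 8);   // 43776
Console.WriteLine(g << 40);  // 43776 as well, because 40 & 0x1F == 8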
According to Wikipedia, the RGBA format follows the following convention:
semantics | alpha | red   | green | blue
bits      | 31-24 | 23-16 | 15-8  | 7-0
This is similar to what your shifts do: you are moving the bits to the left and combining them with a bitwise OR. Then take a look at what your function is expected to return, i.e. the order of the color channels within the int.
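Putting those shift values back into the method from the question would look something like this (a sketch with the same alpha handling, just with shift counts below 32 so no masking is involved):
private static int convertFormatShifting(System.Drawing.Color c)
{
    int alpha = (int)Math.Round((float)c.A / 31.875);
    int a = Math.Max(alpha, 1);
    // same packing as before: alpha in the top byte, then B, G, R
    return (a << 24) | (c.B << 16) | (c.G << 8) | c.R;
}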

How to get amount of 1s from 64 bit number [duplicate]

Possible duplicate: Count number of bits in a 64-bit (long, big) integer?
For an image comparison algorithm I get a 64bit number as result. The amount of 1s in the number (ulong) (101011011100...) tells me how similar two images are, so I need to count them. How would I best do this in C#?
I'd like to use this in a WinRT & Windows Phone App, so I'm also looking for a low-cost method.
EDIT: As I have to count the bits for a large number of Images, I'm wondering if the lookup-table-approach might be best. But I'm not really sure how that works...
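The lookup-table approach mentioned in the edit usually boils down to something like this (a minimal sketch; the names are illustrative): precompute the number of set bits for every possible byte value once, then sum the counts of the eight bytes of the ulong.
static readonly byte[] BitsPerByte = new byte[256];

static void InitBitsPerByte()
{
    for (int i = 0; i < 256; i++)
        BitsPerByte[i] = (byte)((i & 1) + BitsPerByte[i >> 1]); // i >> 1 is already filled in
}

static int CountBits(ulong value)
{
    int count = 0;
    for (int shift = 0; shift < 64; shift += 8)
        count += BitsPerByte[(int)((value >> shift) & 0xFF)];
    return count;
}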
Sean Eron Anderson's Bit Twiddling Hacks page has this trick, among others:
Counting bits set, in parallel
unsigned int v; // count bits set in this (32-bit value)
unsigned int c; // store the total here
static const int S[] = {1, 2, 4, 8, 16}; // Magic Binary Numbers
static const int B[] = {0x55555555, 0x33333333, 0x0F0F0F0F, 0x00FF00FF, 0x0000FFFF};
c = v - ((v >> 1) & B[0]);
c = ((c >> S[1]) & B[1]) + (c & B[1]);
c = ((c >> S[2]) + c) & B[2];
c = ((c >> S[3]) + c) & B[3];
c = ((c >> S[4]) + c) & B[4];
The B array, expressed as binary, is:
B[0] = 0x55555555 = 01010101 01010101 01010101 01010101
B[1] = 0x33333333 = 00110011 00110011 00110011 00110011
B[2] = 0x0F0F0F0F = 00001111 00001111 00001111 00001111
B[3] = 0x00FF00FF = 00000000 11111111 00000000 11111111
B[4] = 0x0000FFFF = 00000000 00000000 11111111 11111111
We can adjust the method for larger integer sizes by continuing with the patterns for the Binary Magic Numbers, B and S. If there are k bits, then we need the arrays S and B to be ceil(lg(k)) elements long, and we must compute the same number of expressions for c as S or B are long. For a 32-bit v, 16 operations are used.
The best method for counting bits in a 32-bit integer v is the following:
v = v - ((v >> 1) & 0x55555555); // reuse input as temporary
v = (v & 0x33333333) + ((v >> 2) & 0x33333333); // temp
c = ((v + (v >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; // count
The best bit counting method takes only 12 operations, which is the same as the lookup-table method, but avoids the memory and potential cache misses of a table. It is a hybrid between the purely parallel method above and the earlier methods using multiplies (in the section on counting bits with 64-bit instructions), though it doesn't use 64-bit instructions. The counts of bits set in the bytes is done in parallel, and the sum total of the bits set in the bytes is computed by multiplying by 0x1010101 and shifting right 24 bits.
A generalization of the best bit counting method to integers of bit-widths up to 128 (parameterized by type T) is this:
v = v - ((v >> 1) & (T)~(T)0/3); // temp
v = (v & (T)~(T)0/15*3) + ((v >> 2) & (T)~(T)0/15*3); // temp
v = (v + (v >> 4)) & (T)~(T)0/255*15; // temp
c = (T)(v * ((T)~(T)0/255)) >> (sizeof(T) - 1) * CHAR_BIT; // count
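Adapted to C# for the 64-bit ulong case, this becomes the following (a sketch; the constants are what the generalization above produces for a 64-bit T):
static int PopCount64(ulong v)
{
    v = v - ((v >> 1) & 0x5555555555555555UL);                     // pairs of bits
    v = (v & 0x3333333333333333UL) + ((v >> 2) & 0x3333333333333333UL); // nibbles
    v = (v + (v >> 4)) & 0x0F0F0F0F0F0F0F0FUL;                     // bytes
    return (int)((v * 0x0101010101010101UL) >> 56);                // sum the byte counts
}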
Something along these lines would do (note that this isn't tested code, I just wrote it here, so it may and probably will require tweaking).
int numberOfOnes = 0;
for (int i = 63; i >= 0; i--)
{
    // note the parentheses: & binds more loosely than == in C#
    if (((yourUInt64 >> i) & 1) == 1) numberOfOnes++;
}
Option 1 - fewer iterations if the 64-bit result < 2^63:
byte numOfOnes = 0;
while (result != 0)
{
    numOfOnes += (byte)(result & 0x1);
    result = (result >> 1);
}
return numOfOnes;
Option 2 - constant number of iterations - can use loop unrolling:
byte numOfOnes = 0;
for (int i = 0; i < 64; i++)
{
    numOfOnes += (byte)(result & 0x1);
    result = (result >> 1);
}
This is a 32-bit version of BitCount; you could easily extend it to a 64-bit version by adding one more fold with a right shift by 32 (see the sketch after the code), and it would be very efficient.
int bitCount(int x) {
    /* First let res = ((x >> 1) & 0x55555555) + (x & 0x55555555);
     * after that, the (2k)th and (2k+1)th bits of res
     * hold the number of 1s contained in the (2k)th
     * and (2k+1)th bits of x.
     * We can use the same idea to calculate the number of 1s
     * contained in the (4k)th through (4k+3)th bits of x,
     * and so on for groups of 8, 16 and 32 bits.
     */
    int varA = (85 << 8) | 85;   // 0x5555
    varA = (varA << 16) | varA;  // 0x55555555
    int res = ((x>>1) & varA) + (x & varA);
    varA = (51 << 8) | 51;       // 0x3333
    varA = (varA << 16) | varA;  // 0x33333333
    res = ((res>>2) & varA) + (res & varA);
    varA = (15 << 8) | 15;       // 0x0F0F
    varA = (varA << 16) | varA;  // 0x0F0F0F0F
    res = ((res>>4) & varA) + (res & varA);
    varA = (255 << 16) | 255;    // 0x00FF00FF
    res = ((res>>8) & varA) + (res & varA);
    varA = (255 << 8) | 255;     // 0x0000FFFF
    res = ((res>>16) & varA) + (res & varA);
    return res;
}
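A sketch of the 64-bit extension described above: the same mask-and-add folds, plus one more fold with a right shift by 32 (masks written directly as ulong literals for brevity):
static int bitCount64(ulong x)
{
    ulong res = ((x >> 1) & 0x5555555555555555UL) + (x & 0x5555555555555555UL);
    res = ((res >> 2)  & 0x3333333333333333UL) + (res & 0x3333333333333333UL);
    res = ((res >> 4)  & 0x0F0F0F0F0F0F0F0FUL) + (res & 0x0F0F0F0F0F0F0F0FUL);
    res = ((res >> 8)  & 0x00FF00FF00FF00FFUL) + (res & 0x00FF00FF00FF00FFUL);
    res = ((res >> 16) & 0x0000FFFF0000FFFFUL) + (res & 0x0000FFFF0000FFFFUL);
    res = ((res >> 32) & 0x00000000FFFFFFFFUL) + (res & 0x00000000FFFFFFFFUL);
    return (int)res;
}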

How to clear the most significant bit?

How do I change the most significant bit in an int from 1 to 0? For instance, I want to change 01101 to 0101.
Edit: Simplified (and explained) answer
The answer I gave below is overkill if your only goal is to set the most significant bit to zero.
That final bit of code constructs a bit mask that includes all the bits in the number.
mask |= mask >> 1;
mask |= mask >> 2;
mask |= mask >> 4;
mask |= mask >> 8;
mask |= mask >> 16;
Here is the series of calculations it performs on a given 32 bit unsigned integer:
mask = originalValue
mask: 01000000000000000000000000000000
mask |= mask >> 1: 01100000000000000000000000000000
mask |= mask >> 2: 01111000000000000000000000000000
mask |= mask >> 4: 01111111100000000000000000000000
mask |= mask >> 8: 01111111111111111000000000000000
mask |= mask >> 16: 01111111111111111111111111111111
Since it does a right bit-shift, which does not wrap around, it will never set bits higher than the most significant bit to one. Since it uses a bitwise OR, it will never set any bits to zero that weren't already zero.
Logically, this will always create a bit mask that fills the whole uint, up to and including the most significant bit that was originally set, but no higher.
From that mask it is fairly easy to shrink it to include all but the most significant bit that was originally set:
mask = mask >> 1: 00111111111111111111111111111111
Then just do a bitwise AND against the original value; this zeroes every bit from the top of the number down to and including the most significant bit that was originally set:
originalValue &= mask: 00000000000000000000000000000000
The original number I used here shows the mask-building process well, but it doesn't show that last calculation very well. Let's run through the calculations with some more interesting example values (the ones in the question):
originalValue: 1101
mask = originalValue
mask: 00000000000000000000000000001101
mask |= mask >> 1: 00000000000000000000000000001111
mask |= mask >> 2: 00000000000000000000000000001111
mask |= mask >> 4: 00000000000000000000000000001111
mask |= mask >> 8: 00000000000000000000000000001111
mask |= mask >> 16: 00000000000000000000000000001111
mask = mask >> 1: 00000000000000000000000000000111
And here is the value you're looking for:
originalValue &= mask: 00000000000000000000000000000101
Since we can see this works, let's put the final code together:
uint SetHighestBitToZero(uint originalValue)
{
    uint mask = originalValue;
    mask |= mask >> 1;
    mask |= mask >> 2;
    mask |= mask >> 4;
    mask |= mask >> 8;
    mask |= mask >> 16;
    mask = mask >> 1;
    return originalValue & mask;
}
// ...
Console.WriteLine(SetHighestBitToZero(13)); // 1101
5
(which is 0101)
Original answer I gave
For these kinds of questions, I often reference this article:
"Bit Twiddling Hacks"
The particular section you want is called "Finding integer log base 2 of an integer (aka the position of the highest bit set)".
Here is the first of a series of solutions (each more optimal than the previous):
http://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious
The final solution in the article is (converted to C#):
uint originalValue = 13;
uint v = originalValue; // find the log base 2 of 32-bit v
int r; // result goes here
uint[] MultiplyDeBruijnBitPosition =
{
0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
};
v |= v >> 1; // first round down to one less than a power of 2
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
r = (int)MultiplyDeBruijnBitPosition[(uint)(v * 0x07C4ACDDU) >> 27];
Once you've found the highest set bit, simply mask it out:
originalValue &= ~(uint)(1 << r); // Force bit "r" to be zero
Inspired by Merlyn Morgan-Graham's answer:
static uint fxn(uint v)
{
    uint i = v;
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return (v >> 1) & i;
}
You could use something like the following (untested):
uint a = 13; // 01101
uint value = 0x80000000; // unsigned, so the probe bit shifts down logically
while ((a & value) != value)
{
    value = value >> 1;
}
if (value > 0)
    a = a ^ value;
int x = 0xD; //01101
int wk = x;
int mask = 1;
while(0!=(wk >>= 1)) mask <<= 1;
x ^= mask; //00101
If you know the size of the type, you can shift out the bits.
You could also just as easily put it in a BitArray and flip the MSB.
Example 1:
short value = -5334;
var ary = new BitArray(new[] { (value & 0xFF00) >> 8 });
ary.Set(7, false);
Example 2:
short value = -5334;
// Shift the sign bit out to the left, then shift a zero back in
// (done in 16-bit arithmetic, since << and >> on a short are otherwise performed in 32 bits):
var newValue = (short)((ushort)(value << 1) >> 1);
// Or reinterpret the same bits as unsigned (Convert.ToUInt16 would throw for a negative value):
var asUnsigned = unchecked((ushort)value);
The easiest way to do bitwise operations like this is to use the left shift operator (<<):
(1 << 3) = 1000b, (1 << 5) = 100000b, etc.
Then you take the value in which you want to change a certain bit and use
| (OR) if you want to change it to a 1 or
& ~ (AND NOT) if you want to change it to a 0.
Like this:
int a = 13; // 01101
int result = a | (1 << 4); // gives 11101
int result2 = result & ~(1 << 4); // gives 01101
If you treat the value as an integer, the result will always be displayed without the leading zero. Maybe string manipulation will output something like what you want, though I'm not sure how that is going to help you. Using a string, it would be something like this:
string y = "01101";
int pos = y.IndexOf("1");
y = y.Insert(pos, "0");
y = y.Remove(pos + 1, 1);
