Why does this function translate that way with bitwise operators? - C#

I was writing a control class for a device when I got to the point where I needed to convert an ARGB color into the device's format. At first, I wrote this function (which worked):
private static int convertFormat(System.Drawing.Color c)
{
    String all;
    int a = (int)((float)c.A / 31.875);
    if (a == 0)
        a = 1;
    all = a.ToString() + c.B.ToString("X").PadLeft(2, '0') + c.G.ToString("X").PadLeft(2, '0') + c.R.ToString("X").PadLeft(2, '0');
    int num = int.Parse(all, System.Globalization.NumberStyles.AllowHexSpecifier);
    return num;
}
but it was so ugly that I wanted a more elegant solution.
So I wrote some for loops to brute-force the correct shift values, trying every combination between 0 and 50.
It worked, and I ended up with this:
private static int convertFormatShifting(System.Drawing.Color c)
{
    int alpha = (int)Math.Round((float)c.A / 31.875);
    int a = Math.Max(alpha, 1);
    return (a << 24) | (c.B << 48) | (c.G << 40) | (c.R << 32);
}
which works!
But now I would love for someone to explain why these shift values are correct.

The least confusing shift values should be as follows:
return (a << 24) | (c.B << 16) | (c.G << 8) | (c.R << 0);
// Of course there's no need to shift by zero ^^^^^^^^
The reason your values work is that C#'s shift operator masks the right-hand operand by the bit width of the left operand minus one: for a 32-bit int, only the low five bits of the shift count are used (count & 31, i.e. count mod 32). In other words, all of the following shifts are equivalent to each other:
c.G << 8
c.G << 40
c.G << 72
c.G << 104
...
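You can check the masking behavior directly; a quick sketch (the sample value 0xAB is mine, not from the question):

int g = 0xAB;
Console.WriteLine(g << 8);   // 43776
Console.WriteLine(g << 40);  // 40 & 31 == 8, so also 43776
Console.WriteLine(g << 72);  // 72 & 31 == 8, same again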

According to Wikipedia, the RGBA format follows this convention:

semantics | alpha | red   | green | blue
bits      | 31-24 | 23-16 | 15-8  | 7-0
This is similar to what your shifts do: you move each channel's bits to the left and combine them with a bitwise OR. Then compare that against what your function is expected to return (i.e. the order of the channels within the int).
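Putting the two answers together, a sketch (my own illustration, not from the thread) of the function with the conventional shift counts, plus how you would extract the channels again:

private static int ConvertFormatShifting(System.Drawing.Color c)
{
    int a = Math.Max((int)Math.Round(c.A / 31.875), 1);  // alpha scaled to 0-8, clamped to at least 1
    return (a << 24) | (c.B << 16) | (c.G << 8) | c.R;   // a|B|G|R, 8 bits per channel
}

Unpacking mirrors the shifts with an 0xFF mask:

int b = (packed >> 16) & 0xFF;
int g = (packed >> 8) & 0xFF;
int r = packed & 0xFF;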

Related

Storing 4x 0-15 numbers in a short (or 2 numbers in a byte)

I'm packing some binary data into a short, and want to store four values of 0-F without a bunch of switch() cases reading the string.Split of a hex string.
Does someone have a clever, elegant solution for this, or should I just long-hand it?
e.g. 1C4A = (1, 12, 4, 10)
Shift in and out
var a = 1;
var b = 12;
var c = 4;
var d = 10;
// pack: each value occupies one 4-bit nibble
var packed = (short)((a << 12) | (b << 8) | (c << 4) | d);
// unpack: shift each nibble down, then mask off everything above it
a = (packed >> 12) & 0xF;
b = (packed >> 8) & 0xF;
c = (packed >> 4) & 0xF;
d = packed & 0xF;
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
Console.WriteLine(d);
Output
1
12
4
10
You can shift by 4 (or divide and multiply by 16) to move numbers into different place values. Then mask and shift your packed number to get your original numbers back.
Eg if you want to store 1 and 2 you could do:
int packed = (1 << 4) + 2;
int v1 = (packed & 0xF0) >> 4;
int v2 = packed & 0x0F;
Console.WriteLine($"{v1}, {v2}");
>>> 1, 2
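The same idea wrapped into reusable helpers, as a sketch (the method names are mine):

static ushort PackNibbles(int a, int b, int c, int d)
{
    // each value must fit in 4 bits (0-15)
    return (ushort)((a << 12) | (b << 8) | (c << 4) | d);
}

static int[] UnpackNibbles(ushort packed)
{
    // shift each nibble down to the low 4 bits, then mask off the rest
    return new[] { (packed >> 12) & 0xF, (packed >> 8) & 0xF, (packed >> 4) & 0xF, packed & 0xF };
}

PackNibbles(1, 12, 4, 10) returns 0x1C4A, and UnpackNibbles(0x1C4A) gives back { 1, 12, 4, 10 }.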

How to get the number represented by least-significant non-zero bit efficiently

For example, if the input is 0xC0, then the result is 0x40.
Since 0xC0 is binary 11000000, the result should be 01000000.
public static byte Puzzle(byte x)
{
    byte[] masks = new byte[] { 1, 2, 4, 8, 16, 32, 64, 128 };
    foreach (var mask in masks)
    {
        if ((x & mask) != 0)
        {
            return mask;
        }
    }
    return 0;
}
This is my current solution. It turns out this question can be solved in 3-4 lines...
public static byte Puzzle(byte x)
{
    return (byte)(x & (~x ^ -x));
}
If x is bbbb1000, then ~x is BBBB0111 (where each B is the complement of the corresponding b).
-x is really ~x + 1 (two's complement), so adding 1 to BBBB0111 gives BBBB1000.
~x ^ -x is then 00001111, and ANDing that with x yields the lowest set bit.
A better answer was supplied by harold in the comments:

public static byte Puzzle(byte x)
{
    return (byte)(x & -x);
}
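A quick check of the x & -x trick against the question's example (test value mine):

byte x = 0xC0;                            // 11000000
int lowest = x & -x;                      // two's complement: -x flips every bit above the lowest 1, so only that bit survives the AND
Console.WriteLine(lowest.ToString("X"));  // 40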
Maybe not the best solution, but you can prepare an array byte[256], store the result for each number, and then just index into it.
Another solution in 1 line:

return (byte)((~((x << 1) | (x << 2) | (x << 3) | (x << 4) | (x << 5) | (x << 6) | (x << 7) | (x << 8)) + 1) >> 1);
Hardly a 1-line solution, but at least it doesn't use a hardcoded list of bits.
I'd do it with some simple bit shifting: down-shift the value until the lowest bit is not 0, then shift 1 up by the number of performed shifts to reconstruct the least significant bit's value. Since it's a 'while' loop, it needs an up-front zero check, or it'll go into an infinite loop on zero.
public static byte Puzzle(byte x)
{
    if (x == 0)
        return 0;
    byte shifts = 0;
    while ((x & 1) == 0)
    {
        shifts++;
        x = (byte)(x >> 1);
    }
    return (byte)(1 << shifts);
}

Read and extract specific number of bits of a 32-bit unsigned integer

How can I read and extract a specific number of bits from a 32-bit unsigned integer in C++/C, and then convert the resulting values to floating point?
For example:
For the 32-bit integer 0xC0DFCF19: x = read 11 bits, y = read the next 11 bits, and z = read the last 10 bits.
Thanks in advance!
Okay! Thanks a lot for all the answers, very helpful!
Could someone give example code that reads the integer 0x7835937C in a similar way, but where "y" should be 334 (x and z remain the same)? Thanks
Here is a demonstrative program
#include <iostream>
#include <iomanip>
#include <cstdint>

int main()
{
    std::uint32_t value = 0xC0DFCF19;

    std::uint32_t x = value & 0x7FFu;
    std::uint32_t y = (value & (0x7FFu << 11)) >> 11;
    std::uint32_t z = (value & (0x3FFu << 22)) >> 22;  // unsigned literal avoids shifting a 1 into the sign bit

    std::cout << "value = " << std::hex << value << std::endl;
    std::cout << "x = " << std::hex << x << std::endl;
    std::cout << "y = " << std::hex << y << std::endl;
    std::cout << "z = " << std::hex << z << std::endl;

    return 0;
}
The program output is
value = c0dfcf19
x = 719
y = 3f9
z = 303
You may declare the variables x, y, z as floats if you need floating-point results.
The general idea is to mask out part of the bits using bitwise boolean operations, and to use the shift operator >> to move the bits to the desired location.
Specifically, the & operator lets you keep specific bits as they are and set all others to zero.
As in your example, for y you need to mask bits 10-20 of the 32-bit input, like in:

input            xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
& with           00000000000111111111110000000000
=                00000000000xxxxxxxxxxx0000000000
shift right by 10 bits
=                000000000000000000000xxxxxxxxxxx
Since you cannot write binary literals in standard C, you have to convert the bitmask to hex (0x1FFC00 here).
The conversion to floating point is handled implicitly when you assign to a double or float variable, assuming you want to convert the integer value (e.g. 5) to floating point (e.g. 5.0).
Assuming that your bits are partitioned as
xxxxxxxxxxxyyyyyyyyyyyzzzzzzzzzz
and you want double results, your example would then be as follows:
unsigned int v = 0xC0DFCF19;      // input (unsigned, so >> does not sign-extend)
double x = v >> 21;               // top 11 bits
double y = (v & 0x1FFC00) >> 10;  // middle 11 bits
double z = v & 0x3FF;             // low 10 bits
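Since the rest of this page is C#, here is the same extraction in C# for comparison; a sketch using the first program's split (x = low 11 bits, y = middle 11, z = top 10):

uint value = 0xC0DFCF19;
uint x = value & 0x7FF;          // low 11 bits    -> 0x719
uint y = (value >> 11) & 0x7FF;  // middle 11 bits -> 0x3F9
uint z = value >> 22;            // top 10 bits    -> 0x303
Console.WriteLine($"{x:x} {y:x} {z:x}");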

Operator shift overflow? and UInt64

I tried to convert an Objective-C project to C# .NET and it was working great a few months ago, but now I updated something and it gives me bad values. Maybe you will see what I'm doing wrong.
This is the original function from https://github.com/magiconair/map2sqlite/blob/master/map2sqlite.m
uint64_t RMTileKey(int tileZoom, int tileX, int tileY)
{
    uint64_t zoom = (uint64_t) tileZoom & 0xFFLL;  // 8 bits, 256 levels
    uint64_t x = (uint64_t) tileX & 0xFFFFFFFLL;   // 28 bits
    uint64_t y = (uint64_t) tileY & 0xFFFFFFFLL;   // 28 bits

    uint64_t key = (zoom << 56) | (x << 28) | (y << 0);

    return key;
}
My buggy .net version :
public UInt64 RMTileKey(int tileZoom, int tileX, int tileY)
{
    UInt64 zoom = (UInt64)tileZoom & 0xFFL;  // 8 bits, 256 levels
    UInt64 x = (UInt64)tileX & 0xFFFFFFFL;   // 28 bits
    UInt64 y = (UInt64)tileY & 0xFFFFFFFL;   // 28 bits

    UInt64 key = (zoom << 56) | (x << 28) | (y << 0);

    return key;
}
The parameters are: tileZoom = 1, tileX = 32, tileY = 1012.
UInt64 key = (zoom << 56) | (x << 28) | (y << 0); gives me an incredibly big number.
Precisely:
if zoom = 1, zoom << 56 = 72057594037927936
if x = 32, x << 28 = 8589934592
Maybe 0 was the result before?
I checked the docs (http://msdn.microsoft.com/en-us/library/f96c63ed(v=vs.110).aspx), which say:
"If the type is unsigned, they are set to 0. Otherwise, they are filled with copies of the sign bit. For left-shift operators without overflow, the statement …"
In my case it seems I get an overflow when using an unsigned type, or maybe my .NET conversion is bad?
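For what it's worth, a round-trip check (mine, not from the thread) suggests the port is behaving as intended: zoom lands in the top byte, so any nonzero zoom makes the key at least 2^56, and decoding recovers the inputs:

ulong key = RMTileKey(1, 32, 1012);         // 72057602627863540
int zoom = (int)((key >> 56) & 0xFF);       // 1
int x    = (int)((key >> 28) & 0xFFFFFFF);  // 32
int y    = (int)(key & 0xFFFFFFF);          // 1012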

How to get the amount of 1s from a 64-bit number [duplicate]

This question already has answers here:
Count number of bits in a 64-bit (long, big) integer? (3 answers)
Closed 9 years ago.
For an image comparison algorithm I get a 64-bit number (ulong) as the result. The number of 1s in its binary form (101011011100...) tells me how similar two images are, so I need to count them. How would I best do this in C#?
I'd like to use this in a WinRT & Windows Phone app, so I'm also looking for a low-cost method.
EDIT: As I have to count the bits for a large number of images, I'm wondering if the lookup-table approach might be best. But I'm not really sure how that works...
Sean Eron Anderson's Bit Twiddling Hacks page has this trick, among others:
Counting bits set, in parallel
unsigned int v; // count bits set in this (32-bit value)
unsigned int c; // store the total here
static const int S[] = {1, 2, 4, 8, 16}; // Magic Binary Numbers
static const int B[] = {0x55555555, 0x33333333, 0x0F0F0F0F, 0x00FF00FF, 0x0000FFFF};
c = v - ((v >> 1) & B[0]);
c = ((c >> S[1]) & B[1]) + (c & B[1]);
c = ((c >> S[2]) + c) & B[2];
c = ((c >> S[3]) + c) & B[3];
c = ((c >> S[4]) + c) & B[4];
The B array, expressed as binary, is:
B[0] = 0x55555555 = 01010101 01010101 01010101 01010101
B[1] = 0x33333333 = 00110011 00110011 00110011 00110011
B[2] = 0x0F0F0F0F = 00001111 00001111 00001111 00001111
B[3] = 0x00FF00FF = 00000000 11111111 00000000 11111111
B[4] = 0x0000FFFF = 00000000 00000000 11111111 11111111
We can adjust the method for larger integer sizes by continuing with the patterns for the Binary Magic Numbers, B and S. If there are k bits, then we need the arrays S and B to be ceil(lg(k)) elements long, and we must compute the same number of expressions for c as S or B are long. For a 32-bit v, 16 operations are used.
The best method for counting bits in a 32-bit integer v is the following:
v = v - ((v >> 1) & 0x55555555); // reuse input as temporary
v = (v & 0x33333333) + ((v >> 2) & 0x33333333); // temp
c = ((v + (v >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; // count
The best bit counting method takes only 12 operations, which is the same as the lookup-table method, but avoids the memory and potential cache misses of a table. It is a hybrid between the purely parallel method above and the earlier methods using multiplies (in the section on counting bits with 64-bit instructions), though it doesn't use 64-bit instructions. The counts of bits set in the bytes is done in parallel, and the sum total of the bits set in the bytes is computed by multiplying by 0x1010101 and shifting right 24 bits.
A generalization of the best bit counting method to integers of bit-widths up to 128 (parameterized by type T) is this:
v = v - ((v >> 1) & (T)~(T)0/3); // temp
v = (v & (T)~(T)0/15*3) + ((v >> 2) & (T)~(T)0/15*3); // temp
v = (v + (v >> 4)) & (T)~(T)0/255*15; // temp
c = (T)(v * ((T)~(T)0/255)) >> (sizeof(T) - 1) * CHAR_BIT; // count
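For C# specifically, the 64-bit variant of that "best method" might look like the sketch below; on .NET Core 3.0+ you can instead call System.Numerics.BitOperations.PopCount, which uses the hardware POPCNT instruction where available:

static int PopCount64(ulong v)
{
    v = v - ((v >> 1) & 0x5555555555555555UL);                          // 2-bit counts
    v = (v & 0x3333333333333333UL) + ((v >> 2) & 0x3333333333333333UL); // 4-bit counts
    v = (v + (v >> 4)) & 0x0F0F0F0F0F0F0F0FUL;                          // 8-bit counts
    return (int)((v * 0x0101010101010101UL) >> 56);                     // sum the bytes
}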
Something along these lines would do (note that this isn't tested code; I just wrote it here, so it may require tweaking).

int numberOfOnes = 0;
for (int i = 63; i >= 0; i--)
{
    // == binds tighter than & in C#, so the extra parentheses are required
    if (((yourUInt64 >> i) & 1) == 1) numberOfOnes++;
}
Option 1 - fewer iterations if the 64-bit result < 2^63:

byte numOfOnes = 0;
while (result != 0)
{
    numOfOnes += (byte)(result & 0x1);
    result = result >> 1;
}
return numOfOnes;
Option 2 - constant number of iterations - can use loop unrolling:

byte numOfOnes = 0;
for (int i = 0; i < 64; i++)
{
    numOfOnes += (byte)(result & 0x1);
    result = result >> 1;
}
This is a 32-bit version of BitCount; you could easily extend it to a 64-bit version by adding one more step with a right shift by 32, and it would still be very efficient.
int bitCount(int x)
{
    /* First let res = ((x >> 1) & 0x55555555) + (x & 0x55555555);
     * after that, the (2k)th and (2k+1)th bits of res
     * hold the number of 1s contained in the (2k)th
     * and (2k+1)th bits of x.
     * We can use a similar step to accumulate the counts for
     * groups of 4 bits, then 8, 16, and 32.
     */
    int varA = (85 << 8) | 85;    // 0x5555
    varA = (varA << 16) | varA;   // 0x55555555
    int res = ((x >> 1) & varA) + (x & varA);

    varA = (51 << 8) | 51;        // 0x3333
    varA = (varA << 16) | varA;   // 0x33333333
    res = ((res >> 2) & varA) + (res & varA);

    varA = (15 << 8) | 15;        // 0x0F0F
    varA = (varA << 16) | varA;   // 0x0F0F0F0F
    res = ((res >> 4) & varA) + (res & varA);

    varA = (255 << 16) | 255;     // 0x00FF00FF
    res = ((res >> 8) & varA) + (res & varA);

    varA = (255 << 8) | 255;      // 0x0000FFFF
    res = ((res >> 16) & varA) + (res & varA);

    return res;
}
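Regarding the lookup-table approach from the question's edit: the usual sketch (names here are mine) precomputes the bit count of every possible byte once, then sums the counts of the eight bytes of the ulong:

static readonly byte[] BitsSetTable = BuildBitsSetTable();

static byte[] BuildBitsSetTable()
{
    var table = new byte[256];
    for (int i = 1; i < 256; i++)
        table[i] = (byte)(table[i >> 1] + (i & 1)); // count(i) = count(i / 2) + lowest bit of i
    return table;
}

static int CountBits(ulong v)
{
    int count = 0;
    for (int shift = 0; shift < 64; shift += 8)
        count += BitsSetTable[(v >> shift) & 0xFF];
    return count;
}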
