Understanding the behavior of the single ampersand operator (&) on integers - C#

I understand that the single ampersand operator is normally used for a 'bitwise AND' operation. However, can anyone help explain the interesting results you get when you use it for comparison between two numbers?
For example;
(6 & 2) = 2
(10 & 5) = 0
(20 & 25) = 16
(123 & 20) = 16
I'm not seeing any logical link between these results and I can only find information on comparing booleans or single bits.

Compare the binary representations of each of those.
110 & 010 = 010
1010 & 0101 = 0000
10100 & 11001 = 10000
1111011 & 0010100 = 0010000
In each case, a digit is 1 in the result only when it is 1 on both the left AND right side of the input.

You need to convert your numbers to their binary representation, and then you will see the link between the results: 6 & 2 = 2 is actually 110 & 010 = 010, etc.
10 & 5 is 1010 & 0101 = 0000

The bitwise AND operation is performed on the integers as represented in binary. For example:
110 (6)
010 (2)
--------
010 (2)

The bitwise AND does exactly that: it performs an AND operation on the bits.
So to anticipate the result you need to look at the bits, not the numbers.
AND gives you 1 only if there's a 1 in both numbers at the same position:
6(110) & 2(010) = 2(010)
10(1010) & 5(0101) = 0(0000)
A bitwise OR will give you 1 if there's a 1 in either number at the same position:
6(110) | 2(010) = 6(110)
10(1010) | 5(0101) = 15(1111)
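If you want to see this for yourself in C#, a quick sketch like the following (purely illustrative) prints the results of & and | in both decimal and binary, using Convert.ToString(value, 2) for the binary form:

using System;

class BitwiseDemo
{
    static void Main()
    {
        // & keeps a bit only where both numbers have a 1; | keeps it where either does.
        Print(6, 2);
        Print(10, 5);
    }

    static void Print(int a, int b)
    {
        Console.WriteLine("{0} & {1} = {2} ({3})", a, b, a & b, Convert.ToString(a & b, 2));
        Console.WriteLine("{0} | {1} = {2} ({3})", a, b, a | b, Convert.ToString(a | b, 2));
    }
}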

6 = 0110
2 = 0010
6 & 2 = 0010
20 = 10100
25 = 11001
20 & 25 = 10000
(10000 in binary is 16 in decimal, so your calculation was right for this one too)
Etc...

Internally, integers are stored in binary format. I strongly suggest you read about that; knowing the bitwise representation of numbers is very important.
That being said, the bitwise comparison compares the bits of the parameters:
Decimal: 6 & 2 = 2
Binary: 0110 & 0010 = 0010

Bitwise AND matches the bits in binary notation one by one, and the result keeps the bits that are common to the two numbers.
To convert a number to binary you need to understand the binary system.
For example
6 = 110 binary
The 110 represents 1x4 + 1x2 + 0x1 = 6.
2 then is
0x4 + 1x2 + 0x1 = 2.
Bitwise AND only retains the positions where both numbers have the bit set; in this case that is the bit for 2, so the result is 2.
Every extra bit is worth double the last, so a 4-bit number uses the multipliers 8, 4, 2, 1 and can therefore represent all numbers from 0 to 15 (the sum of the multipliers).
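To make the place-value idea concrete, here is a small C# sketch (my own illustration, not from the answer above) that builds the 4-bit string of a number by testing each multiplier from the highest down:

using System;

class PlaceValues
{
    static void Main()
    {
        int n = 6;
        string bits = "";
        // Test the multipliers 8, 4, 2, 1 -- each bit is worth double the one to its right.
        for (int weight = 8; weight >= 1; weight /= 2)
            bits += (n & weight) != 0 ? "1" : "0";
        Console.WriteLine("{0} = {1} binary", n, bits);   // prints: 6 = 0110 binary
    }
}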

Related

Binary AND Operations

I'm trying to do some binary AND operations in C# based on the MSDN article about the & operator.
If I do:
1111 & 10 = 2 (0010) // Which is what I expect
However,
1111 & 100 = 68 (1000100) // Which is **not** what I expect.
I would have thought the output would have been 100.
What am I missing?
The numbers you specified, 1111 and 100, are being treated as base 10 numbers, not binary numbers.
If you want the binary integer 1111 you should enter 15, as that's the base 10 version. So:
15 & 4 will become 4, as expected.
There is no syntax in C# for specifying an integer literal in binary. You don't have any choice but to convert them into base ten yourself, or use a runtime conversion such as Convert.ToInt32("1111", 2).
In addition to using decimal, hexadecimal also works if prefixed with 0x, e.g. 0xF & 0x4.
However, while there is no prefix for binary literals, you can do this:
Convert.ToInt32("1111", 2) & Convert.ToInt32("100", 2)
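For example, a minimal console sketch of that approach (written for C# versions without binary literals) might look like this:

using System;

class BinaryStrings
{
    static void Main()
    {
        // Parse the strings as base-2 numbers, then AND them.
        int a = Convert.ToInt32("1111", 2);   // 15
        int b = Convert.ToInt32("100", 2);    // 4
        Console.WriteLine(a & b);                        // 4
        Console.WriteLine(Convert.ToString(a & b, 2));   // 100
    }
}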
The output is correct.
The point is that you are thinking of the numbers as binary, whereas they are treated here as decimal integers. There are no binary literals in C# up to version 6, as you can see from the C# specification on integer literals and the SO question on C# binary literals.
The explanation is below, however.
  1111     10001010111
&   10   & 00000001010
=    2   = 00000000010

  1111     10001010111
&  100   & 00001100100
=   68   = 00001000100
More information on & operator can be found in MSDN.

Understanding hexadecimals and bytes in C#

I seem to lack a fundamental understanding of calculating and using hex and byte values in C# (or programming in general).
I'd like to know how to calculate hex values and bytes (0x--) from sources such as strings and RGB colors (like how do I figure out what the 0x code is for R255 G0 B0 ?)
Why do we use things like FF, is it to compensate for the base 10 system to get a number like 10?
Hexadecimal is base 16, so instead of counting from 0 to 9, we count from 0 to F. And we generally prefix hex constants with 0x. Thus,
Hex Dec
-------------
0x00 = 0
0x09 = 9
0x0A = 10
0x0F = 15
0x10 = 16
0x200 = 512
A byte is the typical unit of storage for values on a computer, and on virtually all modern systems, a byte contains 8 bits. Note that bit actually means binary digit, so from this, we gather that a byte has a maximum value of 11111111 binary. That is 0xFF hex, or 255 decimal. Thus, one byte can be represented by a minimum of two hexadecimal characters. A typical 4-byte int is then 8 hex characters, like 0xDEADBEEF.
RGB values are typically packed as 3 byte values, in that order: R, G, B. Thus,
R=255 G=0 B=0 => R=0xFF G=0x00 B=0x00 => 0xFF0000 or #FF0000 (html)
R=66 G=0 B=248 => R=0x42 G=0x00 B=0xF8 => 0x4200F8 or #4200F8 (html)
For my hex calculations, I like to use python as my calculator:
>>> a = 0x427FB
>>> b = 700
>>> a + b
273079
>>>
>>> hex(a + b)
'0x42ab7'
>>>
>>> bin(a + b)
'0b1000010101010110111'
>>>
For the RGB example, I can demonstrate how we could use bit-shifting to easily calculate those values:
>>> R=66
>>> G=0
>>> B=248
>>>
>>> hex( R<<16 | G<<8 | B )
'0x4200f8'
>>>
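Since the question is about C#, the same bit-shifting calculation could be sketched in C# as well (variable names are just for illustration):

using System;

class RgbPack
{
    static void Main()
    {
        int r = 66, g = 0, b = 248;
        // Pack R into bits 23-16, G into bits 15-8 and B into bits 7-0.
        int rgb = (r << 16) | (g << 8) | b;
        Console.WriteLine("0x{0:X6}", rgb);   // prints: 0x4200F8
    }
}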
Base-16 (also known as hex) notation is convenient because you can fit four bits in exactly one hex digit, making conversion to binary very easy, yet not requiring as much space as a full binary notation. This is useful when you need to represent bit-oriented data in a human-readable form.
Learning hex is easy - all you need to do is memorize a short table of 16 rows defining the hex-to-binary conversion:
0 - 0000
1 - 0001
2 - 0010
3 - 0011
4 - 0100
5 - 0101
6 - 0110
7 - 0111
8 - 1000
9 - 1001
A - 1010
B - 1011
C - 1100
D - 1101
E - 1110
F - 1111
With this table in hand, you can easily convert hex strings of arbitrary length to their corresponding bit patterns:
0x478FD105 - 01000111100011111101000100000101
Converting back is easy as well: group your binary digits by four, and use the table to make hex digits
0010 1001 0100 0101 0100 1111 0101 1100 - 0x29454F5C
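If you'd rather let code do the table lookup, a small C# sketch (illustrative only) can expand each hex digit into its four bits:

using System;

class HexToBits
{
    static void Main()
    {
        string hex = "478FD105";
        string bits = "";
        foreach (char c in hex)
        {
            // Convert one hex digit to its value, then format it as a 4-bit group.
            int value = Convert.ToInt32(c.ToString(), 16);
            bits += Convert.ToString(value, 2).PadLeft(4, '0');
        }
        Console.WriteLine(bits);   // 01000111100011111101000100000101
    }
}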
In decimal, each digit is weighted 10 times more than the one to the right, for example the '3' in 32 is 3 * 10, and the '1' in 102 is 1 * 100. Binary is similar except since there are only two digits (0 and 1) each bit is only weighted twice as much as the one to the right. Hexadecimal uses 16 digits - the 10 decimal digits along with the letters A = 10 to F = 15.
An n-digit decimal number can represent values up to 10^n - 1 and similarly an n-digit binary number can represent values up to 2^n - 1.
Hexadecimal is convenient since you can express a single hex digit in 4 bits since 2^4 = 16 possible values can be represented in 4 bits.
You can convert binary to hex by grouping from the right 4 bits at a time and converting each group to the corresponding hex. For example 1011100 -> (101)(1100) -> 5C
The conversion from hex to binary is even simpler since you can simply expand each hex digit into the corresponding binary, for example 0xA4 -> 1010 0100
The answer to the actual question posted ("Why do we use things like FF, is it to compensate for the base 10 system to get a number like 10?") is this: computers use bits, which are either 1 or 0.
The essence is similar to what Lee posted and called "positional notation". In a decimal number, each position in the number refers to a power of 10. For example, in the number 123, the last position represents 10^0 -- the ones. The middle position represents 10^1 -- the tens. And the first is 10^2 -- the hundreds. So the number "123" represents 1 * 100 + 2 * 10 + 3 * 1 = 123.
Numbers in binary use the same system. The number 10 (base 2) represents 1 * 2^1 + 0 * 2^0 = 2.
If you want to express the decimal number 10 in binary, you get the number 1010. That means, you need four bits to represent a single decimal digit.
But with four bits you can represent up to 16 different values, not just 10 different values. If you need four bits per digit, you might as well use numbers in the base 16 instead of only base 10. That's where hexadecimal comes into play.
Regarding how to convert ARGB values: as has been written in other replies, converting between binary and hexadecimal is comparatively easy (4 binary digits = 1 hex digit).
Converting between decimal and hex is more involved, and at least to me it's been easier (if I have to do it in my head) to first convert the decimal number into binary and then the binary number into hex. Google probably has tons of how-tos and algorithms for that.

convert number to combination of letters

I need to convert a number between 1 and 6000000 to a letter combination like ABCDE.
Fewer letters is better, but I'm guessing I will need 4 or 5.
Can someone point me in the right direction as to how to write an algorithm to convert numbers to letters and back? Only A-Z (caps).
You need to convert to base-26 numbering: 0 is A, 1 is B, 25 is Z, 26 is BA, etc.
The Hexavigesimal Wikipedia article has code for conversion to base 26.
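A minimal C# sketch of that idea (A = 0 ... Z = 25; my own illustration rather than the Wikipedia code) could look like this:

using System;

class Base26
{
    // Encode a non-negative number as letters, A = 0 ... Z = 25.
    static string Encode(int n)
    {
        string s = "";
        do
        {
            s = (char)('A' + n % 26) + s;   // prepend the next base-26 digit
            n /= 26;
        } while (n > 0);
        return s;
    }

    // Decode the letters back to a number.
    static int Decode(string s)
    {
        int n = 0;
        foreach (char c in s)
            n = n * 26 + (c - 'A');
        return n;
    }

    static void Main()
    {
        string letters = Encode(6000000);
        Console.WriteLine(letters);           // five letters, since 26^4 < 6,000,000 < 26^5
        Console.WriteLine(Decode(letters));   // 6000000
    }
}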
There are 26 letters in the alphabet.
You have 26^4 < 6,000,000 and 26^5 > 6,000,000,
so you will need 5 letters for most of your numbers.
Now you just need to express your number in base 26.
There is only one way to write a number X in 0 ... 6,000,000 as follows:
X = a4*26^4 + a3*26^3 + a2*26^2 + a1*26^1 + a0
with each ai in {0, ..., 25}; then you just map each ai to a letter from A to Z.
The most naive thing to do would be to let A,B,...,Z represent the numbers 0,1,...,25 and then just convert your number to base 26 to get the alphabetic conversion.
For example, there is a C# implementation in this answer to this post.
Well, if you want to convert from the decimal representation, then there are 10 digits [0-9], and if you want to have one character per decimal digit in the result, then you will need ten alpha characters. But if you convert from the binary representation, just replace every 1 with an 'A' and every 0 with a 'B'...
Everything depends on how you want to do it... The base you decide to use will determine how many letters you will need.
As an example, to do it from the binary representation:
Take the number mod 2. If the result is 1, prepend an 'A'; if it's 0, prepend a 'B'.
Divide the number by 2 (or right-shift it one position).
Repeat until the number is zero.
start with value of 57
1. 57 Mod 2 = 1 A
2. 57 / 2 = 28
3. 28 Mod 2 = 0 BA
4. 28 / 2 = 14
5. 14 mod 2 = 0 BBA
6. 14 / 2 = 7
7. 7 mod 2 = 1 ABBA --- A musical group !
8. 7 / 2 = 3
9. 3 mod 2 = 1 AABBA
10. 3/ 2 = 1
11. 1 mod 2 = 1 AAABBA
12. 1 / 2 = 0 --- done
You should equate A = 0, B = 1 and so on up to Z = 25.
This would become a number system with the base (or radix) 26.
With this in mind, two digits can represent numbers ranging from 0 - 675 (ZZ = 675).
Three digits would represent 26^3 values, i.e. 0 - 17575.
With 5 digits you can represent from 0 - 11881375 (ZZZZZ).
You can take any standard algorithm that converts a decimal number to another radix to do this.
Conversion between Number bases can be referenced for help.

How to solve this bit operation?

I have one byte in which I need to replace the last (least significant) bits.
Example below.
Original byte: xxxx0110
Replacement byte: 1111
What I want to get: xxxx1111
Original byte: xxxx1111
Replacement byte: 0000
What I want to get: xxxx0000
Original byte: xxxx0000
Replacement byte: 1111
What I want to get: xxxx1111
Original byte: xxxx1010
Replacement byte: 1111
What I want to get: xxxx1111
Original byte: xxxx0101
Replacement byte: 0111
What I want to get: xxxx0111
value = (byte)( (value & ~15) | newByte);
The ~15 creates a mask of everything except the last 4 bits; value & {that mask} takes the last 4 bits away, then | newByte puts the bits from the new data in their place.
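For example, a quick sketch of how that expression behaves (the sample values here are mine):

using System;

class ReplaceLowBits
{
    static void Main()
    {
        byte value = 0x95;     // 1001 0101
        byte newByte = 0x0B;   // 0000 1011, the replacement nibble

        // Clear the low four bits, then OR in the replacement.
        value = (byte)((value & ~15) | newByte);

        Console.WriteLine(Convert.ToString(value, 2).PadLeft(8, '0'));   // 10011011
    }
}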
This can be done with a combination of bitwise AND to clear the bits and bitwise OR to set the bits.
To clear the lowest four bits, you can AND with a value that is 1 everywhere except at those bits, where it's zero. One value like this would be ~0xF, which is the complement of 0xF, which is four ones: 0b1111.
To set the bits, you can then use bitwise OR with the bits to set. Since 0 OR x = x, this works as you'd intend it.
The net result would be
(x & ~0xF) | bits
EDIT: As per Eamon Nerbonne's comment, you should then cast back to a byte:
(byte)((x & ~0xF) | bits)
If my understanding is right, you can also do this with shifts: shift the original byte right 4 times to drop the low bits, shift it back left 4 times so the high bits return to their positions (the low four bits are now zero), and then OR it with the replacement byte. Then you will get the desired result.
For example: a = 1001 1101
Replacement byte: 0000 1011
Right shift a 4 times: 0000 1001
Left shift that 4 times: 1001 0000
OR with the replacement: 1001 1011 (end result).
Maybe this link is helpful: http://www.codeproject.com/KB/cs/leftrightshift.aspx
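A rough C# sketch of that shifting approach (the sample values are just for illustration):

using System;

class ShiftReplace
{
    static void Main()
    {
        byte original = 0x9D;      // 1001 1101
        byte replacement = 0x0B;   // 0000 1011

        // Shift right then left by 4 to clear the low nibble, then OR in the new bits.
        byte result = (byte)(((original >> 4) << 4) | replacement);

        Console.WriteLine(Convert.ToString(result, 2).PadLeft(8, '0'));   // 10011011
    }
}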
Trim the last 4 bits and append the new ones.

Why AND two numbers to get a Boolean?

I am working on a little Hardware interface project based on the Velleman k8055 board.
The example code comes in VB.Net and I'm rewriting this into C#, mostly to have a chance to step through the code and make sense of it all.
One thing has me baffled though:
At one stage they read all digital inputs (which come back in an Integer) and then set each checkbox based on the answer, ANDing it with a number:
i = ReadAllDigital
cbi(1).Checked = (i And 1)
cbi(2).Checked = (i And 2) \ 2
cbi(3).Checked = (i And 4) \ 4
cbi(4).Checked = (i And 8) \ 8
cbi(5).Checked = (i And 16) \ 16
I have not done digital systems in a while and I understand what they are trying to do, but what effect would ANDing two numbers have? Doesn't everything above 0 equate to true?
How would you translate this to C#?
This is doing a bitwise AND, not a logical AND.
Each of those basically determines whether a single bit in i is set, for instance:
5 AND 4 = 4
5 AND 2 = 0
5 AND 1 = 1
(Because 5 = binary 101, and 4, 2 and 1 are the decimal values of binary 100, 010 and 001 respectively.)
I think you'll have to translate it to this:
(i & 1) == 1
(i & 2) == 2
(i & 4) == 4
etc...
(The parentheses are needed because == binds more tightly than & in C#.)
This is using the bitwise AND operator.
When you use the bitwise AND operator, it compares the binary representations of the two given values and returns a value in which a bit is set only if that bit is set in both operands.
For instance, when you do this:
2 & 2
It will do this:
0010 & 0010
And this will result in:
0010
0010
&----
0010
Then if you compare this result with 2 (0010), it will of course return true.
Just to add:
It's called bitmasking
http://en.wikipedia.org/wiki/Mask_(computing)
A boolean only requires 1 bit. In most programming language implementations, however, a boolean takes up more than a single bit. On a PC this isn't a big waste, but embedded systems usually have very limited memory, so the waste is significant. To save space, booleans are packed together; this way each boolean variable only takes up 1 bit.
You can think of it as doing something like an array indexing operation, with a byte (= 8 bits) becoming like an array of 8 boolean variables, so maybe that's your answer: use an array of booleans.
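To illustrate that "array of 8 booleans" view in C# (a sketch, not part of the original answer), you can pull the individual flags out of a byte like this:

using System;

class PackedFlags
{
    static void Main()
    {
        byte flags = 0x2A;   // 0010 1010 -- eight packed booleans

        // Treat bit i of the byte as the i-th boolean.
        for (int i = 0; i < 8; i++)
        {
            bool set = (flags & (1 << i)) != 0;
            Console.WriteLine("bit {0}: {1}", i, set);
        }
    }
}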
Think of this in binary e.g.
10101010
AND
00000010
yields 00000010
i.e. not zero. Now if the first value was
10101000
you'd get
00000000
i.e. zero.
Note the further division to reduce everything to 1 or 0.
(i and 16) / 16 extracts the value (1 or 0) of the 5th bit.
1xxxx AND 16 = 16, and 16 / 16 = 1
0xxxx AND 16 = 0, and 0 / 16 = 0
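In C#, that extraction can be written either with the divide shown above or with a shift; a tiny sketch:

using System;

class FifthBit
{
    static void Main()
    {
        int i = 21;                         // 10101 binary -- the 5th bit (value 16) is set
        Console.WriteLine((i & 16) / 16);   // 1, the divide used in the VB code
        Console.WriteLine((i >> 4) & 1);    // 1, the same result using a shift
    }
}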
The VB And operator performs "...bitwise conjunction on two numeric expressions", which maps to '&' in C#. The '\' is integer division, and the equivalent in C# is '/', provided that both operands are integer types.
The constant numbers are masks (think of them in binary). So what the code does is apply the bitwise AND operator to the byte and the mask, and then divide by the mask value, in order to get the bit.
For example:
xxxxxxxx & 00000100 = 00000x00
if x == 1
00000100 / 00000100 = 00000001
else if x == 0
00000000 / 00000100 = 00000000
In C# use the BitArray class to directly index individual bits.
To set an individual bit i is straightforward:
b |= 1 << i;
To reset an individual bit i is a little more awkward:
b &= ~(1 << i);
Be aware that both the bitwise operators and the shift operators tend to promote everything to int which may unexpectedly require casting.
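For completeness, a small sketch of the BitArray approach mentioned above (the input value is made up):

using System;
using System.Collections;

class BitArrayDemo
{
    static void Main()
    {
        byte value = 0x15;   // say this came back from the digital inputs (0001 0101)

        // BitArray indexes the bits directly; bits[0] is the least significant bit.
        BitArray bits = new BitArray(new byte[] { value });

        for (int i = 0; i < 5; i++)
            Console.WriteLine("input {0}: {1}", i + 1, bits[i]);
    }
}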
As said this is a bitwise AND, not a logical AND. I do see that this has been said quite a few times before me, but IMO the explanations are not so easy to understand.
I like to think of it like this:
Write up the binary numbers under each other (here I'm doing 5 and 1):
101
001
Now we need to turn this into a binary number that keeps only the 1s that appear in both numbers at the same position - in this case:
001
In this case we see it gives the same number as the 2nd number, in which case this operation (in VB) evaluates to true. Let's look at the other examples (using 5 as i):
(5 and 2)
101
010
----
000
(false)
(5 and 4)
101
100
---
100
(true)
(5 and 8)
0101
1000
----
0000
(false)
(5 and 16)
00101
10000
-----
00000
(false)
EDIT: and obviously I missed the entire point of the question - here's the translation to C#:
cbi[1].Checked = (i & 1) == 1;
cbi[2].Checked = (i & 2) == 2;
cbi[3].Checked = (i & 4) == 4;
cbi[4].Checked = (i & 8) == 8;
cbi[5].Checked = (i & 16) == 16;
I prefer to use hexadecimal notation when bit twiddling (e.g. 0x10 instead of 16). It makes more sense as you increase your bit depths as 0x20000 is better than 131072.
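Putting those pieces together, a rough sketch of the whole translation, using a bool array as my own stand-in for the cbi checkboxes from the original VB code:

using System;

class ReadInputs
{
    static void Main()
    {
        int i = 0x15;   // pretend this came from ReadAllDigital (0001 0101)

        // checkedStates[k] stands in for cbi(k + 1).Checked in the VB code.
        bool[] checkedStates = new bool[5];
        for (int bit = 0; bit < 5; bit++)
            checkedStates[bit] = (i & (0x01 << bit)) != 0;

        foreach (bool state in checkedStates)
            Console.WriteLine(state);
    }
}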
