I'm in a computer systems course and have been struggling, in part, with two's complement. I want to understand it, but everything I've read hasn't brought the picture together for me. I've read the Wikipedia article and various other articles, including my text book.
What is two's complement, how can we use it and how can it affect numbers during operations like casts (from signed to unsigned and vice versa), bit-wise operations and bit-shift operations?
Two's complement is a clever way of storing integers so that common math problems are very simple to implement.
To understand, you have to think of the numbers in binary.
It basically says,
for zero, use all 0's.
for positive integers, start counting up, with a maximum of 2^(number of bits - 1) - 1.
for negative integers, do exactly the same thing, but switch the role of 0's and 1's and count down (so instead of starting with 0000, start with 1111 - that's the "complement" part).
Let's try it with a mini-byte of 4 bits (we'll call it a nibble - 1/2 a byte).
0000 - zero
0001 - one
0010 - two
0011 - three
0100 to 0111 - four to seven
That's as far as we can go in positives. 2^3 - 1 = 7.
For negatives:
1111 - negative one
1110 - negative two
1101 - negative three
1100 to 1000 - negative four to negative eight
Note that you get one extra value for negatives (1000 = -8) that you don't get for positives, because 0000 is used for zero. You can think of this as the computer's number line.
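If it helps to see the whole nibble at once, here is a small C# sketch (my own illustration, not part of the original explanation) that prints every 4-bit pattern next to the two's complement value it represents:

using System;

class NibbleTable
{
    static void Main()
    {
        for (int bits = 0; bits < 16; bits++)
        {
            // When the sign bit (bit 3) is set, the pattern stands for bits - 16,
            // so 1111 prints as -1 and 1000 prints as -8.
            int value = (bits & 0x8) != 0 ? bits - 16 : bits;
            Console.WriteLine("{0} = {1}", Convert.ToString(bits, 2).PadLeft(4, '0'), value);
        }
    }
}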
Distinguishing between positive and negative numbers
Doing this, the first bit takes on the role of a "sign" bit, as it can be used to distinguish between nonnegative and negative values. If the most significant (leftmost) bit is 1, the number is negative, whereas if it is 0, the number is nonnegative.
"Sign-magnitude" negative numbers just have the sign bit flipped of their positive counterparts, but this approach has to deal with interpreting 1000 (one 1 followed by all 0s) as "negative zero" which is confusing.
"Ones' complement" negative numbers are just the bit-complement of their positive counterparts, which also leads to a confusing "negative zero" with 1111 (all ones).
You will likely not have to deal with Ones' Complement or Sign-Magnitude integer representations unless you are working very close to the hardware.
I wonder if it could be explained any better than the Wikipedia article.
The basic problem that you are trying to solve with two's complement representation is the problem of storing negative integers.
First, consider an unsigned integer stored in 4 bits. You can have the following
0000 = 0
0001 = 1
0010 = 2
...
1111 = 15
These are unsigned because there is no indication of whether they are negative or positive.
Sign Magnitude and Excess Notation
To store negative numbers you can try a number of things. First, you can use sign magnitude notation which assigns the first bit as a sign bit to represent +/- and the remaining bits to represent the magnitude. So using 4 bits again and assuming that 1 means - and 0 means + then you have
0000 = +0
0001 = +1
0010 = +2
...
1000 = -0
1001 = -1
1111 = -7
So, you see the problem there? We have positive and negative 0. The bigger problem is adding and subtracting binary numbers. The circuits to add and subtract using sign magnitude will be very complex.
What is
0010
1001 +
----
?
Another system is excess notation. You can store negative numbers and you get rid of the two-zeros problem, but addition and subtraction remain difficult.
So along comes two's complement. Now you can store positive and negative integers and perform arithmetic with relative ease. There are a number of methods to convert a number into two's complement. Here's one.
Convert Decimal to Two's Complement
Convert the number to binary (ignore the sign for now)
e.g. 5 is 0101 and -5 is 0101
If the number is a positive number then you are done.
e.g. 5 is 0101 in binary using two's complement notation.
If the number is negative then
3.1 find the complement (invert 0's and 1's)
e.g. -5 is 0101 so finding the complement is 1010
3.2 Add 1 to the complement 1010 + 1 = 1011.
Therefore, -5 in two's complement is 1011.
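If you prefer to see those steps as code, here is a hedged C# sketch; the helper name ToTwosComplement and the string-based output are my own choices, just for illustration:

using System;

class TwosComplementDemo
{
    // Returns the n-bit two's complement pattern of a (possibly negative) value.
    static string ToTwosComplement(int value, int bits)
    {
        int magnitude = Math.Abs(value);             // step 1: ignore the sign for now
        if (value >= 0)                              // step 2: positive numbers are done
            return Convert.ToString(magnitude, 2).PadLeft(bits, '0');

        int mask = (1 << bits) - 1;                  // e.g. 1111 for 4 bits
        int flipped = ~magnitude & mask;             // step 3.1: invert 0's and 1's
        int twos = (flipped + 1) & mask;             // step 3.2: add 1
        return Convert.ToString(twos, 2).PadLeft(bits, '0');
    }

    static void Main()
    {
        Console.WriteLine(ToTwosComplement(5, 4));   // 0101
        Console.WriteLine(ToTwosComplement(-5, 4));  // 1011
    }
}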
So, what if you wanted to do 2 + (-3) in binary? 2 + (-3) is -1.
What would you have to do if you were using sign magnitude to add these numbers? 0010 + 1101 = ?
Using two's complement consider how easy it would be.
2 = 0010
-3 = 1101 +
-------------
-1 = 1111
Converting Two's Complement to Decimal
Converting 1111 to decimal:
The number starts with 1, so it's negative, so we find the complement of 1111, which is 0000.
Add 1 to 0000, and we obtain 0001.
Convert 0001 to decimal, which is 1.
Apply the sign = -1.
Tada!
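The same decoding steps as a small C# sketch (my own illustration; the helper name FromTwosComplement is made up for this example):

using System;

class DecodeDemo
{
    static int FromTwosComplement(string pattern)
    {
        int bits = pattern.Length;
        int raw = Convert.ToInt32(pattern, 2);
        if ((raw & (1 << (bits - 1))) == 0)
            return raw;                              // sign bit clear: the value is as written

        int mask = (1 << bits) - 1;
        int magnitude = ((~raw & mask) + 1) & mask;  // find the complement, then add 1
        return -magnitude;                           // apply the sign
    }

    static void Main()
    {
        Console.WriteLine(FromTwosComplement("1111")); // -1
        Console.WriteLine(FromTwosComplement("0111")); //  7
    }
}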
Like most explanations I've seen, the ones above are clear about how to work with 2's complement, but don't really explain what they are mathematically. I'll try to do that, for integers at least, and I'll cover some background that's probably familiar first.
Recall how it works for decimal: 2345 is a way of writing 2 × 10^3 + 3 × 10^2 + 4 × 10^1 + 5 × 10^0.
In the same way, binary is a way of writing numbers using just 0 and 1, following the same general idea but replacing those 10s above with 2s. Then in binary, 1111 is a way of writing 1 × 2^3 + 1 × 2^2 + 1 × 2^1 + 1 × 2^0, and if you work it out, that turns out to equal 15 (base 10). That's because it is 8 + 4 + 2 + 1 = 15.
This is all well and good for positive numbers. It even works for negative numbers if you're willing to just stick a minus sign in front of them, as humans do with decimal numbers. That can even be done in computers, sort of, but I haven't seen such a computer since the early 1970's. I'll leave the reasons for a different discussion.
For computers it turns out to be more efficient to use a complement representation for negative numbers. And here's something that is often overlooked. Complement notations involve some kind of reversal of the digits of the number, even the implied zeroes that come before a normal positive number. That's awkward, because the question arises: all of them? That could be an infinite number of digits to be considered.
Fortunately, computers don't represent infinities. Numbers are constrained to a particular length (or width, if you prefer). So let's return to positive binary numbers, but with a particular size. I'll use 8 digits ("bits") for these examples. So our binary number would really be 00001111, or 0 × 2^7 + 0 × 2^6 + 0 × 2^5 + 0 × 2^4 + 1 × 2^3 + 1 × 2^2 + 1 × 2^1 + 1 × 2^0.
To form the 2's complement negative, we first complement all the (binary) digits to form 11110000 and add 1 to form 11110001. But how are we to understand that to mean -15?
The answer is that we change the meaning of the high-order bit (the leftmost one). This bit will be a 1 for all negative numbers. The change is to flip the sign of its contribution to the value of the number it appears in. So now our 11110001 is understood to represent -1 × 2^7 + 1 × 2^6 + 1 × 2^5 + 1 × 2^4 + 0 × 2^3 + 0 × 2^2 + 0 × 2^1 + 1 × 2^0. Notice that "-" in front of that expression? It means that the sign bit carries the weight -2^7, that is -128 (base 10). All the other positions retain the same weight they had in unsigned binary numbers.
Working out our -15, it is -128 + 64 + 32 + 16 + 1. Try it on your calculator; it's -15.
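That weighted-sum reading is easy to turn into code. A minimal C# sketch (my own, purely illustrative): the leftmost bit contributes -2^(n-1), every other bit its usual positive power of two.

using System;

class WeightedSum
{
    static int ValueOf(string pattern)
    {
        int n = pattern.Length;
        int sum = 0;
        for (int i = 0; i < n; i++)
        {
            if (pattern[i] != '1') continue;
            int weight = 1 << (n - 1 - i);
            sum += (i == 0) ? -weight : weight;   // only the sign bit carries a negative weight
        }
        return sum;
    }

    static void Main()
    {
        Console.WriteLine(ValueOf("11110001"));   // -128 + 64 + 32 + 16 + 1 = -15
    }
}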
Of the three main ways that I've seen negative numbers represented in computers, 2's complement wins hands down for convenience in general use. It has an oddity, though. Since it's binary, there has to be an even number of possible bit combinations. Each positive number can be paired with its negative, but there's only one zero. Negating a zero gets you zero. So there's one more combination, the number with 1 in the sign bit and 0 everywhere else. The corresponding positive number would not fit in the number of bits being used.
What's even more odd about this number is that if you try to form its positive by complementing and adding one, you get the same negative number back. It seems natural that zero would do this, but this is unexpected and not at all the behavior we're used to because computers aside, we generally think of an unlimited supply of digits, not this fixed-length arithmetic.
This is like the tip of an iceberg of oddities. There's more lying in wait below the surface, but that's enough for this discussion. You could probably find more if you research "overflow" for fixed-point arithmetic. If you really want to get into it, you might also research "modular arithmetic".
2's complement is very useful for finding the value of a binary number; however, I thought of a much more concise way of solving such a problem (I've never seen anyone else publish it):
Take a binary number, for example 1101, which (assuming the leading 1 is the sign bit) is equal to -3.
Using 2's complement we would do this: flip 1101 to 0010, add 0001 + 0010, which gives us 0011. 0011 in positive binary = 3. Therefore 1101 = -3!
What I realized:
instead of all the flipping and adding, start from the basic method for solving a positive binary number (let's say 0101): (2^3 * 0) + (2^2 * 1) + (2^1 * 0) + (2^0 * 1) = 5.
Do exactly the same concept with a negative!(with a small twist)
take 1101, for example:
for the first bit, instead of 2^3 * 1 = 8, do -(2^3 * 1) = -8.
then continue as usual, doing -8 + (2^2 * 1) + (2^1 * 0) + (2^0 * 1) = -3
Imagine that you have a finite number of bits/trits/digits/whatever. You define 0 as all digits being 0, and count upwards naturally:
00
01
02
..
Eventually you will overflow.
98
99
00
We have two digits and can represent all numbers from 0 to 99. All those numbers are positive! Suppose we want to represent negative numbers too?
What we really have is a cycle. The number before 2 is 1. The number before 1 is 0. The number before 0 is... 99.
So, for simplicity, let's say that any number over 50 is negative. "0" through "49" represent 0 through 49. "99" is -1, "98" is -2, ... "50" is -50.
This representation is ten's complement. Computers typically use two's complement, which is the same except using bits instead of digits.
The nice thing about ten's complement is that addition just works. You do not need to do anything special to add positive and negative numbers!
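Here is a toy C# sketch of that two-digit world (my own example, with made-up Encode/Decode helpers): everything is kept modulo 100, codes 50 through 99 are read as -50 through -1, and ordinary addition gives the right signed answer.

using System;

class TensComplement
{
    const int Modulus = 100;

    static int Encode(int value) => ((value % Modulus) + Modulus) % Modulus;      // -1 -> 99
    static int Decode(int code) => code >= Modulus / 2 ? code - Modulus : code;   // 99 -> -1

    static void Main()
    {
        int a = Encode(7);               // 07
        int b = Encode(-3);              // 97
        int sum = (a + b) % Modulus;     // plain "unsigned" addition with wrap-around
        Console.WriteLine(Decode(sum));  // 4
    }
}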
I read a fantastic explanation on Reddit by jng, using the odometer as an analogy.
It is a useful convention. The same circuits and logic operations that
add / subtract positive numbers in binary still work on both positive
and negative numbers if using the convention, that's why it's so
useful and omnipresent.
Imagine the odometer of a car, it rolls around at (say) 99999. If you
increment 00000 you get 00001. If you decrement 00000, you get 99999
(due to the roll-around). If you add one back to 99999 it goes back to
00000. So it's useful to decide that 99999 represents -1. Likewise, it is very useful to decide that 99998 represents -2, and so on. You have
to stop somewhere, and also by convention, the top half of the numbers
are deemed to be negative (50000-99999), and the bottom half positive
just stand for themselves (00000-49999). As a result, the top digit
being 5-9 means the represented number is negative, and it being 0-4
means the represented is positive - exactly the same as the top bit
representing sign in a two's complement binary number.
Understanding this was hard for me too. Once I got it and went back to
re-read the books, articles, and explanations (there was no internet
back then), it turned out a lot of those describing it didn't really
understand it. I did write a book teaching assembly language after
that (which did sell quite well for 10 years).
Two's complement is found by adding one to the ones' complement of the given number.
Let's say we have to find the two's complement of 10101: first find its ones' complement, that is, 01010, then add 1 to this result: 01010 + 1 = 01011, which is the final answer.
Let's get the answer to 10 - 12 in binary form using 8 bits:
What we will really do is 10 + (-12)
We need to get the complement of 12 so we can subtract it from 10.
12 in binary is 00001100.
10 in binary is 00001010.
To get the complement of 12 we just invert all the bits and then add 1.
12 in binary with the bits inverted is 11110011. This is also known as the one's complement.
Now we add one, which gives 11110100.
So 11110100 is the (two's) complement of 12! Easy when you think of it this way.
Now you can solve the above question of 10 - 12 in binary form.
00001010
11110100
-----------------
11111110
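The same 8-bit computation as a hedged C# sketch (my own, not from the original explanation):

using System;

class SubtractByAdding
{
    static void Main()
    {
        int a = 10;                          // 00001010
        int b = 12;                          // 00001100
        int negB = (~b + 1) & 0xFF;          // invert and add 1: 11110100
        int result = (a + negB) & 0xFF;      // 11111110

        Console.WriteLine(Convert.ToString(result, 2).PadLeft(8, '0'));  // 11111110
        Console.WriteLine(unchecked((sbyte)result));                     // -2
    }
}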
Looking at the two's complement system from a math point of view it really makes sense. In ten's complement, the idea is to essentially 'isolate' the difference.
Example: 63 - 24 = x
We add the complement of 24 which is really just (100 - 24). So really, all we are doing is adding 100 on both sides of the equation.
Now the equation is: 100 + 63 - 24 = x + 100, that is why we remove the 100 (or 10 or 1000 or whatever).
Due to the inconvenient situation of having to subtract one number from a long chain of zeroes, we use a 'diminished radix complement' system; in the decimal system, that is nine's complement.
When we are presented with a number subtracted from a big chain of nines, we just need to reverse the numbers.
Example: 99999 - 03275 = 96724
That is the reason, after nine's complement, we add 1. As you probably know from childhood math, 9 becomes 10 by 'stealing' 1. So basically it's just ten's complement that takes 1 from the difference.
In Binary, two's complement is equatable to ten's complement, while one's complement to nine's complement. The primary difference is that instead of trying to isolate the difference with powers of ten (adding 10, 100, etc. into the equation) we are trying to isolate the difference with powers of two.
It is for this reason that we invert the bits. Just like how our minuend is a chain of nines in decimal, our minuend is a chain of ones in binary.
Example: 111111 - 101001 = 010110
Because a chain of ones is 1 below a nice power of two, it 'steals' 1 from the difference just like nines do in decimal.
When we are using negative binary numbers, we are really just saying:
0000 - 0101 = x
1111 - 0101 = 1010
1111 + 0000 - 0101 = x + 1111
In order to 'isolate' x, we need to add 1 because 1111 is one away from 10000 and we remove the leading 1 because we just added it to the original difference.
1111 + 1 + 0000 - 0101 = x + 1111 + 1
10000 + 0000 - 0101 = x + 10000
Just remove 10000 from both sides to get x, it's basic algebra.
The word complement derives from completeness. In the decimal world the numerals 0 through 9 provide a complement (complete set) of numerals or numeric symbols to express all decimal numbers. In the binary world the numerals 0 and 1 provide a complement of numerals to express all binary numbers. In fact, the symbols 0 and 1 must be used to represent everything (text, images, etc.), as well as positive (0) and negative (1).
In our world the blank space to the left of number is considered as zero:
35=035=000000035.
In a computer storage location there is no blank space. All bits (binary digits) must be either 0 or 1. To use memory efficiently, numbers may be stored in 8-bit, 16-bit, 32-bit, 64-bit or 128-bit representations. When a number stored as an 8-bit value is transferred to a 16-bit location, the sign and magnitude (absolute value) must remain the same. Both 1's complement and 2's complement representations facilitate this.
As a noun:
Both 1's complement and 2's complement are binary representations of signed quantities where the most significant bit (the one on the left) is the sign bit. 0 is for positive and 1 is for negative.
2's complement does not mean negative; it means a signed quantity. As in decimal, the magnitude is represented as a positive quantity. The representation uses sign extension to preserve the value when promoting it to a register with more bits:
[0101]=[00101]=[00000000000101]=5 (base 10)
[1011]=[11011]=[11111111111011]=-5(base 10)
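Sign extension is something the language does for you when widening a signed type. A small C# sketch (my own illustration of the idea, not part of the original answer):

using System;

class SignExtension
{
    static void Main()
    {
        sbyte narrow = -5;     // 8 bits:  1111 1011
        short wide = narrow;   // 16 bits: 1111 1111 1111 1011 (sign bit copied into the new bits)

        Console.WriteLine(Convert.ToString(unchecked((byte)narrow), 2).PadLeft(8, '0'));
        Console.WriteLine(Convert.ToString(unchecked((ushort)wide), 2).PadLeft(16, '0'));
        Console.WriteLine(wide);   // still -5
    }
}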
As a verb:
2's complement means to negate. It does not mean make negative. It means if negative make positive; if positive make negative. The magnitude is the absolute value:
if a >= 0 then |a| = a
if a < 0 then |a| = -a = 2scomplement of a
This ability allows efficient binary subtraction using negate then add.
a - b = a + (-b)
The official way to take the 1's complement is, for each digit, to subtract its value from 1.
1's comp(0101) = 1010.
This is the same as flipping or inverting each bit individually. This results in a negative zero, which is not well loved, so adding one to the 1's complement gets rid of the problem.
To negate or take the 2s complement first take the 1s complement then add 1.
                          Example 1    Example 2
original number             0101         1101
1's complement              1010         0010
add 1                       0001         0001
2's complement (negated)    1011         0011
In the examples the negation works as well with sign extended numbers.
Adding:

  1110  carry                       111110  carry
   0110        is the same as        000110
 + 1111                            + 111111
 ------                            --------
   0101  sum                         000101  sum

Subtracting:

                                     00000  carry
   0110        is the same as        00110
 - 0111                            + 11001
 ------                            -------
   1111  sum                         11111  sum

Notice that when working with 2's complement, blank space to the left of the number is filled with zeros for positive numbers but is filled with ones for negative numbers. The carry is always added and must be either a 1 or a 0.
Cheers
2's complement is essentially a way of coming up with the additive inverse of a binary number. Ask yourself this: given a number in binary form (present at a fixed-length memory location), what bit pattern, when added to the original number (at that same fixed-length memory location), would make the result all zeros?

If we could come up with this bit pattern, then that bit pattern would be the negative representation (additive inverse) of the original number, since by definition adding a number to its additive inverse always results in zero.

Example: take 5, which is 101, held inside a single 8-bit byte. Now the task is to come up with a bit pattern which, when added to the given bit pattern (00000101), would result in all zeros at the memory location that holds this 5, i.e. all 8 bits of the byte should be zero.

To do that, start from the rightmost bit of 101 and, for each individual bit, ask the same question again: what bit should I add to the current bit to make the result zero? Continue doing that, taking into account the usual carry. After we are done with the 3 rightmost places (the digits that define the original number without regard to the leading zeros), the last carry goes into the bit pattern of the additive inverse.

Furthermore, since we are holding the original number in a single 8-bit byte, all other leading bits in the additive inverse should also be 1's, so that (and this is important) when the computer adds "the number" (represented using the 8-bit pattern) and its additive inverse using "that" storage type (a byte), the result in that byte is all zeros.
    1 1 1       carries
  -------
    1 0 1
  1 0 1 1       ---> additive inverse
  ---------
    0 0 0
Many of the answers so far nicely explain why two's complement is used to represent negative numbers, but do not tell us what a two's complement number actually is, particularly not why a '1' is added, which is in fact often done in the wrong way.
The confusion comes from a poor understanding of the definition of a complement number. A complement is the missing part that would make something complete.
The radix complement of an n digit number x in radix b is, by definition, b^n-x.
In binary 4 is represented by 100, which has 3 digits (n=3) and a radix of 2 (b=2). So its radix complement is b^n-x = 2^3-4=8-4=4 (or 100 in binary).
However, in binary obtaining a radix's complement is not as easy as getting its diminished radix complement, which is defined as (b^n-1)-y, just 1 less than that of radix complement. To get a diminished radix complement, you simply flip all the digits.
100 -> 011 (diminished (one's) radix complement)
to obtain the radix (two's) complement, we simply add 1, as the definition says.
011 +1 ->100 (two's complement).
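A quick C# sketch (mine, just to check the definition) confirming that, for every 3-bit x, the radix complement b^n - x and "flip the bits, then add 1" agree (both kept within n bits):

using System;

class RadixComplement
{
    static void Main()
    {
        int n = 3;
        int size = 1 << n;                                 // b^n = 2^3 = 8
        for (int x = 0; x < size; x++)
        {
            int byDefinition = (size - x) % size;          // b^n - x, kept in n bits
            int byFlipAndAdd = ((~x & (size - 1)) + 1) % size;
            Console.WriteLine("x={0}: {1} == {2}", x, byDefinition, byFlipAndAdd);
        }
    }
}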
Now with this new understanding, let's take a look at the example given by Vincent Ramdhanie (see the second response above):
Converting 1111 to decimal:
The number starts with 1, so it's negative, so we find the complement of 1111, which is 0000.
Add 1 to 0000, and we obtain 0001.
Convert 0001 to decimal, which is 1.
Apply the sign = -1.
Tada!
Should be understood as:
The number starts with 1, so it's negative. So we know it is the two's complement of some value x. To find the x represented by its two's complement, we first need to find its 1's complement.
two's complement of x: 1111
one's complement of x: 1111-1 ->1110;
x = 0001, (flip all digits)
Apply the sign -, and the answer is -x = -1.
I liked lavinio's answer, but shifting bits adds some complexity. Often there's a choice of moving bits while respecting the sign bit or while not respecting the sign bit. This is the choice between treating the numbers as signed (-8 to 7 for a nibble, -128 to 127 for bytes) or full-range unsigned numbers (0 to 15 for nibbles, 0 to 255 for bytes).
It is a clever means of encoding negative integers in such a way that approximately half of the combinations of bits of a data type are reserved for negative integers, and adding a negative integer to its corresponding positive integer results in a carry out of the register that leaves the result as binary zero.
So, in 2's complement, if one is 0001 then -1 is 1111, because adding them gives a combined sum of 0000 (with a carry out of 1).
2's complement: when we add one to the 1's complement of a number we get the 2's complement. For example: for 100101, its 1's complement is 011010 and its 2's complement is 011010 + 1 = 011011 (adding one to the 1's complement). For more information,
this article explains it graphically.
Two's complement is mainly used for the following reasons:
To avoid multiple representations of 0
To avoid keeping track of the carry bit (as in one's complement) in case of an overflow.
Carrying out simple operations like addition and subtraction becomes easy.
Two's complement is one of the ways of expressing a negative number and most of the controllers and processors store a negative number in two's complement form.
In simple terms, two's complement is a way to store negative numbers in computer memory, whereas positive numbers are stored as normal binary numbers.
Let's consider this example,
The computer uses the binary number system to represent any number.
x = 5;
This is represented as 0101.
x = -5;
When the computer encounters the - sign, it computes its two's complement and stores it.
That is, 5 = 0101 and its two's complement is 1011.
The important rules the computer uses to interpret these numbers are:
If the first bit is 1 then it is a negative number.
If all the bits are 0 then it is 0.
Otherwise (the first bit is 0 and at least one other bit is 1) it is a positive number.
Note that there is no -0 in this system: the pattern 1000 is not "negative zero"; in a 4-bit system it represents -8.
To bitwise complement a number is to flip all the bits in it. To two’s complement it, we flip all the bits and add one.
Using 2’s complement representation for signed integers, we apply the 2’s complement operation to convert a positive number to its negative equivalent and vice versa. So using nibbles for an example, 0001 (1) becomes 1111 (-1) and applying the op again, returns to 0001.
The behaviour of the operation at zero is advantageous in giving a single representation for zero without special handling of positive and negative zeroes. 0000 complements to 1111, which, when 1 is added, overflows to 0000, giving us one zero rather than a positive and a negative one.
A key advantage of this representation is that the standard addition circuits for unsigned integers produce correct results when applied to it. For example, adding 1 and -1 in nibbles: 0001 + 1111; the carry overflows out of the register, leaving behind 0000.
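You can see that reuse of unsigned addition directly in C#; a minimal sketch of my own, not from the original answer:

using System;

class UnsignedAddWorks
{
    static void Main()
    {
        byte one = 0x01;        // 0000 0001
        byte minusOne = 0xFF;   // 1111 1111, i.e. -1 in 8-bit two's complement

        // Plain unsigned addition; the ninth bit (the carry) falls off when truncating to a byte.
        byte sum = unchecked((byte)(one + minusOne));
        Console.WriteLine(sum); // 0
    }
}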
For a gentle introduction, the wonderful Computerphile have produced a video on the subject.
The question is 'What is “two's complement”?'
The simple answer for those wanting to understand it theoretically (and me seeking to complement the other more practical answers): 2's complement is the representation for negative integers in the binary system that does not require additional characters, such as + and -.
The two's complement of a given number is the number obtained by adding 1 to the ones' complement of the number.
Suppose, we have a binary number: 10111001101
Its 1's complement is: 01000110010
And its two's complement will be: 01000110011
Reference: Two's Complement (Thomas Finley)
I invert all the bits and add 1. Programmatically:
// In C++11
// Powers of two: _powers[n] == 2^n, used below to build a mask of n_bits ones.
int _powers[] = {
    1,
    2,
    4,
    8,
    16,
    32,
    64,
    128
};

int value = 3;
int n_bits = 4;
// XOR with (2^n_bits - 1) flips the low n_bits bits (one's complement),
// then adding 1 gives the two's complement: 3 (0011) becomes 13 (1101), i.e. -3 in 4 bits.
int twos_complement = (value ^ (_powers[n_bits] - 1)) + 1;
You can also use an online calculator to calculate the two's complement binary representation of a decimal number: http://www.convertforfree.com/twos-complement-calculator/
The simplest answer:
1111 + 1 = (1)0000. So 1111 must be -1. Then -1 + 1 = 0.
That was all I needed to understand it.
I'm writing some performance-sensitive C# code that deals with character comparisons. I recently discovered a trick where you can tell if a char is equal to one or more others without branching, if the difference between them is a power of 2.
For example, say you want to check if a char is U+0020 (space) or U+00A0 (non-breaking space). Since the difference between the two is 0x80, you can do this:
public static bool Is20OrA0(char c) => (c | 0x80) == 0xA0;
as opposed to this naive implementation, which would add an additional branch if the character was not a space:
public static bool Is20OrA0(char c) => c == 0x20 || c == 0xA0;
The first one works because the difference between the two chars is a power of 2, so it has exactly one bit set. That means when you OR it with the character and it leads to a certain result, there are exactly 2^1 = 2 different characters that could have led to that result.
Anyway, my question is, can this trick somehow be extended to characters with differences that aren't powers of 2? For example, if I had the characters # and 0 (which have a difference of 13, by the way), is there any sort of bit-twiddling hack I could use to check if a char was equal to either of them, without branching?
Thanks for your help.
edit: For reference, here is where I first stumbled across this trick in the .NET Framework source code, in char.IsLetter. They take advantage of the fact that a - A == 97 - 65 == 32, and simply OR it with 0x20 to uppercase the char (as opposed to calling ToUpper).
If you can tolerate a multiply instead of a branch, and the values you are testing against only occupy the lower bits of the data type you are using (and therefore won't overflow when multiplied by a smallish constant; consider casting to a larger data type and using a correspondingly larger mask value if this is an issue), then you could multiply the value by a constant to force the two values to be a power of 2 apart.
For example, in the case of # and 0 (decimal values 35 and 48), the values are 13 apart. Rounding down, the nearest power of 2 to 13 is 8, which is 0.615384615 of 13. Multiplying this by 256 and rounding up, to give an 8.8 fixed point value gives 158.
Here are the binary values for 35 and 48, multiplied by 158, and their neighbours:
34 * 158 = 5372 = 0001 0100 1111 1100
35 * 158 = 5530 = 0001 0101 1001 1010
36 * 158 = 5688 = 0001 0110 0011 1000
47 * 158 = 7426 = 0001 1101 0000 0010
48 * 158 = 7584 = 0001 1101 1010 0000
49 * 158 = 7742 = 0001 1110 0011 1110
The lower 7 bits can be ignored because they aren't necessary in order to separate any of the neighbouring values from each other, and apart from that, the values 5530 and 7584 only differ in bit 11, so you can use the mask and compare technique, but using an AND instead of an OR. The mask value in binary is 1111 0111 1000 0000 (63360) and the compare value is 0001 0101 1000 0000 (5504), so you can use this code:
public static bool Is23Or30(char c) => ((c * 158) & 63360) == 5504;
I haven't profiled this, so I can't promise it's faster than a simple compare.
If you do implement something like this, be sure to write some test code that loops through every possible value that can be passed to the function, to verify that it works as expected.
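For what it's worth, a hedged C# sketch of that exhaustive check (Is23Or30 is the method defined above; the rest is my own test harness):

using System;

class ExhaustiveCheck
{
    public static bool Is23Or30(char c) => ((c * 158) & 63360) == 5504;

    static void Main()
    {
        for (int i = char.MinValue; i <= char.MaxValue; i++)
        {
            char c = (char)i;
            bool expected = c == '#' || c == '0';   // the straightforward, branching version
            if (Is23Or30(c) != expected)
                Console.WriteLine("Mismatch at U+{0:X4}", i);
        }
        Console.WriteLine("Done.");
    }
}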
You can use the same trick to compare against a set of 2^N values provided that they have all other bits equal except N bits. E.g. if the set of values is 0x01, 0x03, 0x81, 0x83 then N=2 and you can use (c | 0x82) == 0x83. Note that the values in the set differ only in bits 1 and/or 7. All other bits are equal. There are not many cases where this kind of optimization can be applied, but when it can, and every little bit of extra speed counts, it's a good optimization.
This is the same way boolean expressions are optimized (e.g. when compiling VHDL). You may also want to look up Karnaugh maps.
That being said, it is really bad practice to do this kind of comparison on character values, especially with Unicode, unless you know what you are doing and are doing really low-level stuff (such as drivers, kernel code, etc.). Comparing characters (as opposed to bytes) has to take into account linguistic features (such as uppercase/lowercase, ligatures, accents, composed characters, etc.).
On the other hand if all you need is binary comparison (or classification) you can use lookup tables. With single byte character sets these can be reasonably small and really fast.
If not having branches is really your main concern, you can do something like this:
if ( (x-c0|c0-x) & (x-c1|c1-x) & ... & (x-cn|cn-x) & 0x80 ) {
    // x is not equal to any ci
}
If x is not equal to a specific c, either x-c or c-x will be negative, so x-c|c-x will have bit 7 set. This should work for signed and unsigned chars alike. If you & it for all c's, the result will have bit 7 set only if it's set for every c (i.e. x is not equal to any of them).
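A hedged C# adaptation of this idea for two constants (the original is written for 8-bit chars; here the subtractions are done in int, so the indicator is the int sign bit rather than bit 7):

using System;

class BranchlessEquality
{
    // True when c equals c0 or c1, using only arithmetic and bit operations.
    static bool IsEither(char c, char c0, char c1)
    {
        int x = c;
        int d0 = (x - c0) | (c0 - x);   // sign bit set unless x == c0
        int d1 = (x - c1) | (c1 - x);   // sign bit set unless x == c1
        return ((d0 & d1) & int.MinValue) == 0;   // sign bit clear => equal to one of them
    }

    static void Main()
    {
        Console.WriteLine(IsEither('#', '#', '0'));   // True
        Console.WriteLine(IsEither('0', '#', '0'));   // True
        Console.WriteLine(IsEither('A', '#', '0'));   // False
    }
}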
I want to create a Byte array in C# and the first and second bytes have to be 70 and 75 respectively:
So I did something like following:
List<byte> retval = new List<byte>();
retval.Add(Convert.ToByte(75));
retval.Add(Convert.ToByte(70));
I thought the function would convert the numbers into bytes, and if I put a watch on the list at runtime it would look a little different, but it did not change. I was expecting to see values in a format like 0x00-something, but they still look like raw integers.
Am I missing something?
Right-click your Watch Window or Immediate Window and check the Hexadecimal Display option. It is by default unchecked for bytes, because the Visual Studio IDE assumes that you would prefer to read decimal int values instead of hex bytes.
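If you want the hex form in code rather than relying on the debugger setting, here is a small sketch of my own using the standard "X2" format specifier (the stored values themselves are unchanged; only the textual representation differs):

using System;
using System.Collections.Generic;

class HexDisplay
{
    static void Main()
    {
        List<byte> retval = new List<byte> { 75, 70 };

        foreach (byte b in retval)
            Console.WriteLine("0x" + b.ToString("X2"));   // 0x4B, 0x46
    }
}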
A byte is an integral value and, along with all the other integral types, it doesn't look like anything until you convert it into a representation. They are just electrical charges on a silicon chip.
byte, ushort, uint and ulong are all unsigned integral types of varying lengths (1, 2, 4 and 8 octets respectively). They are stored as binary (base two) numbers, with each bit representing a power of two, with the low-order (logically, the right-most) bit representing 2^0 or 1 and the high-order bit representing the highest power of two possible for that size (respectively 2^7, 2^15, 2^31 and 2^63).
sbyte, short, int and long are signed integral types of varying lengths (1, 2, 4 and 8 octets respectively). Internally, however, they are stored in what's called two's-complement notation
http://en.wikipedia.org/wiki/Two's_complement
http://www.cs.cornell.edu/~tomf/notes/cps104/twoscomp.html
That means the high order bit is the sign bit (0 for non-negative, 1 for negative). Non-negative values (x >= 0) are represented as above, except the largest power of two that can be represented is, respectively, 2^6, 2^14, 2^30 and 2^62, since the high order bit is reserved for the sign.
Negative numbers are stored as the two's complement of the absolute value. That allow subtraction to be performed by addition circuits, subtraction being the addition of the negative. To get the two's-complement of a number, the following process is performed:
Invert the bits
Add 1, propagating any carries in the usual way, to obtain the negative form of the integer.
So the negative form of 1 is
binary representation of +1: 0000 0000 0000 0001
invert the bits (complement): 1111 1111 1111 1110
add 1: 1111 1111 1111 1111
If you add the positive and negative forms, the result is zero:
0000 0000 0000 0001
1111 1111 1111 1111
-------------------
0000 0000 0000 0000
And the CPU's carry flag will be set.
Zero remains the same, as zero really doesn't have a sign:
binary zero: 0000 0000 0000 0000
inverted (complement): 1111 1111 1111 1111
add 1: 0000 0000 0000 0000
And, changing the sign of the smallest negative number that can be represented:
smallest negative value: 1000 0000 0000 0000
inverted (complement): 0111 1111 1111 1111
add 1: 1000 0000 0000 0000
And you get the same value back, since its absolute value is 1 larger than the largest positive number that can be represented. Adding the number and its "negation" (itself) together still yields the expected 0, as the carry propagates out of the high-order bit and sets the CPU's carry flag.
I seem to lack a fundamental understanding of calculating and using hex and byte values in C# (or programming in general).
I'd like to know how to calculate hex values and bytes (0x--) from sources such as strings and RGB colors (like how do I figure out what the 0x code is for R255 G0 B0 ?)
Why do we use things like FF, is it to compensate for the base 10 system to get a number like 10?
Hexadecimal is base 16, so instead of counting from 0 to 9, we count from 0 to F. And we generally prefix hex constants with 0x. Thus,
Hex Dec
-------------
0x00 = 0
0x09 = 9
0x0A = 10
0x0F = 15
0x10 = 16
0x200 = 512
A byte is the typical unit of storage for values on a computer, and on most modern systems a byte contains 8 bits. Note that bit actually means binary digit, so from this we gather that a byte has a maximum value of 11111111 binary. That is 0xFF hex, or 255 decimal. Thus, one byte can be represented by a minimum of two hexadecimal characters. A typical 4-byte int is then 8 hex characters, like 0xDEADBEEF.
RGB values are typically packed with 3 byte values, in that order, RGB. Thus,
R=255 G=0 B=0 => R=0xFF G=0x00 B=0x00 => 0xFF0000 or #FF0000 (html)
R=66 G=0 B=248 => R=0x42 G=0x00 B=0xF8 => 0x4200F8 or #4200F8 (html)
For my hex calculations, I like to use python as my calculator:
>>> a = 0x427FB
>>> b = 700
>>> a + b
273079
>>>
>>> hex(a + b)
'0x42ab7'
>>>
>>> bin(a + b)
'0b1000010101010110111'
>>>
For the RGB example, I can demonstrate how we could use bit-shifting to easily calculate those values:
>>> R=66
>>> G=0
>>> B=248
>>>
>>> hex( R<<16 | G<<8 | B )
'0x4200f8'
>>>
Base-16 (also known as hex) notation is convenient because you can fit four bits in exactly one hex digit, making conversion to binary very easy, yet not requiring as much space as a full binary notation. This is useful when you need to represent bit-oriented data in a human-readable form.
Learning hex is easy - all you need to do is memorizing a short table of 16 rows defining hex-to-binary conversion:
0 - 0000
1 - 0001
2 - 0010
3 - 0011
4 - 0100
5 - 0101
6 - 0110
7 - 0111
8 - 1000
9 - 1001
A - 1010
B - 1011
C - 1100
D - 1101
E - 1110
F - 1111
With this table in hand, you can easily convert hex strings of arbitrary length to their corresponding bit patterns:
0x478FD105 - 01000111100011111101000100000101
Converting back is easy as well: group your binary digits by four, and use the table to make hex digits
0010 1001 0100 0101 0100 1111 0101 1100 - 0x29454F5C
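The table-driven conversion is easy to script. A hedged C# sketch of my own (the helper name HexToBinary is made up for this example):

using System;

class HexToBits
{
    static string HexToBinary(string hex)
    {
        string bits = "";
        foreach (char digit in hex)
        {
            int value = Convert.ToInt32(digit.ToString(), 16);   // one hex digit -> 0..15
            bits += Convert.ToString(value, 2).PadLeft(4, '0');  // -> its 4-bit group
        }
        return bits;
    }

    static void Main()
    {
        Console.WriteLine(HexToBinary("478FD105"));
        // 01000111100011111101000100000101
    }
}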
In decimal, each digit is weighted 10 times more than the one to the right, for example the '3' in 32 is 3 * 10, and the '1' in 102 is 1 * 100. Binary is similar except since there are only two digits (0 and 1) each bit is only weighted twice as much as the one to the right. Hexadecimal uses 16 digits - the 10 decimal digits along with the letters A = 10 to F = 15.
An n-digit decimal number can represent values up to 10^n - 1 and similarly an n-digit binary number can represent values up to 2^n - 1.
Hexadecimal is convenient since you can express a single hex digit in 4 bits since 2^4 = 16 possible values can be represented in 4 bits.
You can convert binary to hex by grouping from the right 4 bits at a time and converting each group to the corresponding hex. For example 1011100 -> (101)(1100) -> 5C
The conversion from hex to binary is even simpler since you can simply expand each hex digit into the corresponding binary, for example 0xA4 -> 1010 0100
The answer to the actual question posted ("Why do we use things like FF, is it to compensate for the base 10 system to get a number like 10?") is this: computers use bits, which are either 1 or 0.
The essence is similar to what Lee posted and called "positional notation". In a decimal number, each position in the number refers to a power of 10. For example, in the number 123, the last position represents 10^0 -- the ones. The middle position represents 10^1 -- the tens. And the first is 10^2 -- the hundreds. So the number "123" represents 1 * 100 + 2 * 10 + 3 * 1 = 123.
Numbers in binary use the same system. The number 10 (base 2) represents 1 * 2^1 + 0 * 2^0 = 2.
If you want to express the decimal number 10 in binary, you get the number 1010. That means, you need four bits to represent a single decimal digit.
But with four bits you can represent up to 16 different values, not just 10 different values. If you need four bits per digit, you might as well use numbers in the base 16 instead of only base 10. That's where hexadecimal comes into play.
Regarding how to convert ARGB values: as has been written in other replies, converting between binary and hexadecimal is comparatively easy (4 binary digits = 1 hex digit).
Converting between decimal and hex is more involved, and at least to me it's been easier (if I have to do it in my head) to first convert the decimal into binary representation, and then the binary number into hex. Google probably has tons of how-tos and algorithms for that.
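If you would rather let the language do the conversions, here is a small C# sketch (my own, analogous to the Python calculator shown earlier):

using System;

class Conversions
{
    static void Main()
    {
        int value = 248;
        Console.WriteLine(Convert.ToString(value, 2));   // 11111000  (decimal -> binary)
        Console.WriteLine(value.ToString("X"));          // F8        (decimal -> hex)
        Console.WriteLine(Convert.ToInt32("F8", 16));    // 248       (hex -> decimal)
    }
}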
I understand that the single ampersand operator is normally used for a 'bitwise AND' operation. However, can anyone help explain the interesting results you get when you use it for comparison between two numbers?
For example;
(6 & 2) = 2
(10 & 5) = 0
(20 & 25) = 16
(123 & 20) = 16
I'm not seeing any logical link between these results and I can only find information on comparing booleans or single bits.
Compare the binary representations of each of those.
110 & 010 = 010
1010 & 0101 = 0000
10100 & 11001 = 10000
1111011 & 0010100 = 0010000
In each case, a digit is 1 in the result only when it is 1 on both the left AND right side of the input.
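A small C# sketch (my own) that prints those comparisons so you can see the columns line up:

using System;

class BitwiseAndDemo
{
    static void Show(int a, int b)
    {
        Console.WriteLine(Convert.ToString(a, 2).PadLeft(8, '0'));
        Console.WriteLine(Convert.ToString(b, 2).PadLeft(8, '0'));
        Console.WriteLine(Convert.ToString(a & b, 2).PadLeft(8, '0') + "  = " + (a & b));
        Console.WriteLine();
    }

    static void Main()
    {
        Show(6, 2);     // 00000110 & 00000010 = 00000010 (2)
        Show(10, 5);    // 00001010 & 00000101 = 00000000 (0)
        Show(20, 25);   // 00010100 & 00011001 = 00010000 (16)
        Show(123, 20);  // 01111011 & 00010100 = 00010000 (16)
    }
}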
You need to convert your numbers to binary representation and then you will see the link between the results: 6 & 2 = 2 is actually 110 & 010 = 010, etc.
10 & 5 is 1010 & 0101 = 0000
The binary AND operation is performed on the integers, represented in binary. For example
110 (6)
010 (2)
--------
010 (2)
The bitwise AND does exactly that: it performs an AND operation on the bits.
So to anticipate the result you need to look at the bits, not the numbers.
AND gives you 1 only if there's a 1 in both numbers in the same position:
6(110) & 2(010) = 2(010)
10(1010) & 5(0101) = 0(0000)
A bitwise OR will give you 1 if there's a 1 in either number in the same position:
6(110) | 2(010) = 6(110)
10(1010) | 5(0101) = 15(1111)
6 = 0110
2 = 0010
6 & 2 = 0010
20 = 10100
25 = 11001
20 & 25 = 10000
(looks like your calculation is wrong for this one)
Etc...
Internally, integers are stored in binary format. I strongly suggest you read about that. Knowing about the bitwise representation of numbers is very important.
That being said, the bitwise comparison compares the bits of the parameters:
Decimal: 6 & 2 = 2
Binary: 0110 & 0010 = 0010
Bitwise AND matches the bits in binary notation one by one, and the result keeps the bits that are common between the two numbers.
To convert a number to binary you need to understand the binary system.
For example
6 = 110 binary
The 110 represents 1x4 + 1x2 + 0x1 = 6.
2 then is
0x4 + 1x2 + 0x1 = 2.
Bitwise AND only retains the positions where both numbers have the bit set; in this case that is the bit for 2, and the result is then 2.
Every extra bit is double the last, so a 4-bit number uses the multipliers 8, 4, 2, 1 and can therefore represent all numbers from 0 to 15 (the sum of the multipliers).