How to generate bit-shift equations? - c#

Preferably for this to be done in C#.
Suppose I have the integer 1024.
I will be able to generate these equations:
4096 >> 2 = 1024
65536 >> 6 = 1024
64 << 4 = 1024
and so on...
Any clues or tips or guides or ideas?
Edit: Ok, in simple terms, what I want is, for example... Hey, I'm giving you an integer of 1024; now give me a list of possible bit-shift equations that will always return the value of 1024.
Ok, scratch that. It seems my question wasn't very concise and clear. I'll try again.
What I want, is to generate a list of possible bit-shift equations based on a numerical value. For example, if I have a value of 1024, how would I generate a list of possible equations that would always return the value of 1024?
Sample Equations:
4096 >> 2 = 1024
65536 >> 6 = 1024
64 << 4 = 1024
In a similar way, if I asked you to give me some additional equations that would give me 5, you would respond with:
3 + 2 = 5
10 - 5 = 5
4 + 1 = 5
Am I still too vague? I apologize for that.

You may reverse each equation and thus "generate" possible equations:
1024 >> 4 == 64
and therefore
64 << 4 == 1024
Thus, generate all right/left shifts of 1024 that don't lose bits to overflow or underflow of your variable, and then invert the corresponding equation.
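The reversal idea above can be sketched in C#. This is a minimal sketch, not a definitive generator: `Generate` is a hypothetical helper name, and the shift range (1 to 6) is an arbitrary cutoff chosen so the sample equations from the question appear.

```csharp
using System;
using System.Collections.Generic;

class ShiftEquations
{
    // Produce "x >> n = target" and "x << n = target" equations for a target,
    // skipping shifts that would lose bits, so reversing them stays exact.
    public static List<string> Generate(uint target)
    {
        var equations = new List<string>();

        // target << n overflows-safely gives some larger x with x >> n == target.
        for (int n = 1; n <= 6; n++)
            if ((ulong)target << n <= uint.MaxValue)       // no bits lost to overflow
                equations.Add($"{target << n} >> {n} = {target}");

        // target >> n is only reversible if the low n bits of target are zero.
        for (int n = 1; n <= 6; n++)
            if (target != 0 && (target & ((1u << n) - 1)) == 0)
                equations.Add($"{target >> n} << {n} = {target}");

        return equations;
    }

    static void Main()
    {
        foreach (var eq in Generate(1024))
            Console.WriteLine(eq);
    }
}
```

For 1024 this prints, among others, `4096 >> 2 = 1024`, `65536 >> 6 = 1024` and `64 << 4 = 1024`, matching the sample equations.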

Just use the shift operators '>>' and '<<':
uint value1 = 4096 >> 2;
uint value2 = 65536 >> 6;
uint value3 = 64 << 4;
http://www.blackwasp.co.uk/CSharpShiftOperators.aspx

Are you asking why these relationships exist? Shifting bits left by 1 bit is equivalent to multiplying by 2. So 512 << 1 = 512 * 2 = 1024. Shifting right by 1 is dividing by 2. Shifting by 2 is multiplying/dividing by 4, and shifting by n is multiplying/dividing by 2^n. So 1 << 10 = 1 * 2^10 = 1024. To see why, write the number out in binary; let's take 7 as an example:
7 -> 0000 0111b
7 << 1 -> 0000 1110b = 14
7 << 3 -> 0011 1000b = 56
If you already knew all this, I apologize, but you might want to make the question less vague.
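The equivalences above can be checked directly in C#; a small sketch:

```csharp
using System;

class ShiftVsMultiply
{
    static void Main()
    {
        Console.WriteLine(512 << 1);  // 1024, same as 512 * 2
        Console.WriteLine(1 << 10);   // 1024, same as 1 * 2^10
        Console.WriteLine(7 << 1);    // 14, i.e. 0000 1110b
        Console.WriteLine(7 << 3);    // 56, i.e. 0011 1000b
        Console.WriteLine(1024 >> 2); // 256, same as 1024 / 4
    }
}
```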

Related

How do I inverse this bitwise operation?

I have the below code and I can't understand why the last line doesn't return 77594624. Can anyone help me write the inverse bitwise operation to go from 77594624 to 4 and back to 77594624?
Console.WriteLine(77594624);
Console.WriteLine((77594624 >> 24) & 0x1F);
Console.WriteLine((4 & 0x1F) << 24);
When you bit shift a value you might "lose" bits during that operation. If you right shift the value 16, which is 0b10000 in binary, by 4, you will get 1.
0b10000 = 16
0b00001 = 1
But this is also the case for other numbers like 28, which is 0b11100.
0b11100 = 28 = 16 + 8 + 4
0b00001 = 1
So with the start point 1 you cannot go "back" to the original number by left shifting again, as there is not enough information/bits available, you don't know if you want to go back to 16 or 28.
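That loss of information can be seen directly in C#, using the 16 and 28 example from above (a minimal sketch):

```csharp
using System;

class LossyShift
{
    static void Main()
    {
        Console.WriteLine(16 >> 4);         // 1
        Console.WriteLine(28 >> 4);         // 1, same result: the low bits were discarded
        Console.WriteLine((16 >> 4) << 4);  // 16, round-trips only because 16 had no low bits set
        Console.WriteLine((28 >> 4) << 4);  // 16, not 28: the 0b1100 part is gone for good
    }
}
```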
77594624 (0x04A00000) looks like this in binary, and the x's mark the part that is extracted by the right shift and bitwise AND:
00000100101000000000000000000000
   xxxxx
Clearly some information is lost.
If the other parts of the number were available as well, then they could be reassembled.

C# - Bits and Bytes

I try to store some information in two bytes (byte[2]).
In the first four bits of the first byte I want to store a "type-information" encoded as a value from 0-9. And in the last four bits + the second byte I want to store a size-info, so the maximum of the size-info is 4095 (0xFFF).
Let's do some examples to explain what I mean.
When type-info is 5 and the size is 963 then the result should look like: 35-C3 as hex string.
35-C3 => the 5 is the type-info and the 3C3 is the 963.
03-00 => type-info 3 and size 0.
13-00 => type-info 3 and size 1.
But I have no idea how to do this with C# and need some community help:
byte type = 5; // hex 5
short size = 963; // hex 3C3
byte[] bytes = ???
string result = BitConverter.ToString(bytes);
// here result should by 35-C3
It should look like this:
bytes = new byte[2];
bytes[0] = type << 4 | size >> 8;
bytes[1] = size & 0xff;
Note: initially my numbers were wrong, I had written type << 8 | size >> 16 while it should have been type << 4 | size >> 8 as Aleksey showed in his answer.
Comments moved into the answer for posterity:
By shifting your type bits to the left by 4 before storing them in bytes[0] you ensure that they occupy the top 4 bits of bytes[0]. By shifting your size bits to the right by 8 you ensure that the low 8 bits of size are dropped out, and only the top 4 bits remain, and these top 4 bits are going to be stored into the low 4 bits of bytes[0]. It helps to draw a diagram:
         bytes[0]                  bytes[1]
+------------------------+ +------------------------+
| 7  6  5  4  3  2  1  0 | | 7  6  5  4  3  2  1  0 |
+------------------------+ +------------------------+

 type << 4
+-----------+
| 3  2  1  0| <-- type
+-----------+

+------------+ +------------------------+
| 11 10  9  8| | 7  6  5  4  3  2  1  0 | <-- size
+------------+ +------------------------+
   size >> 8         size & 0xff
size is a 12-bit quantity. The bits are in positions 11 through 0. By shifting it right by 8 you are dropping the rightmost 8 bits and you are left with the top 4 bits only, at positions 3-0. These 4 bits are then stored in the low 4 bits of bytes[0].
Try this:
byte[] bytes = new byte[2];
bytes[0] = (byte) (type << 4 | size >> 8);
bytes[1] = (byte) (size & 0xff);
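A runnable sketch of the packing above, plus the reverse (decoding) step, which the question implies but no answer spells out. Note that with the type in the high nibble, this layout produces the hex string 53-C3 for type 5 and size 963:

```csharp
using System;

class PackDemo
{
    static void Main()
    {
        byte type = 5;
        short size = 963; // 0x3C3

        var bytes = new byte[2];
        bytes[0] = (byte)(type << 4 | size >> 8); // type in high nibble, size bits 11-8 in low nibble
        bytes[1] = (byte)(size & 0xff);           // size bits 7-0
        Console.WriteLine(BitConverter.ToString(bytes)); // 53-C3

        // Reverse: pull the two fields back out of the two bytes.
        int decodedType = bytes[0] >> 4;                     // top 4 bits of bytes[0]
        int decodedSize = (bytes[0] & 0x0F) << 8 | bytes[1]; // low 4 bits of bytes[0] + bytes[1]
        Console.WriteLine($"{decodedType} {decodedSize}");   // 5 963
    }
}
```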
Micro-optimisations of the memory profile for types are not something you usually should bother with in .NET. If you were running native C++ I could understand this to some degree, but I would still advise against it. It is a lot of work with limited benefits at best.
But in .NET you just make a class or struct with an enumeration (Int8) "Type" and an Int16 "SizeInfo", say "good enough" and call it a day. Spend your resources on something better than shaving 1 byte of memory off it, when 64 bits is the native integer size of most computers nowadays.
BitArray is about the closest you can get to actually defining specific Bits from a byte in .NET. And it features some info on similar types.
If you want to do the math the hard way, Modulo is a good place to start.

Why would someone use the << operator in an enum declaration?

I was looking at the code I have currently in my project and found something like this:
public enum MyEnum
{
open = 1 << 00,
close = 1 << 01,
Maybe = 1 << 02,
........
}
The << operator is the shift operator, which shifts the first operand left by the number of bits specified in the second operand.
But why would someone use this in an enum declaration?
This allows you to do something like this:
var myEnumValue = MyEnum.open | MyEnum.close;
without needing to count bit values of multiples of 2.
(like this):
public enum MyEnum
{
open = 1,
close = 2,
Maybe = 4,
........
}
This is usually used with bit fields, since the pattern is clear, it removes the need to manually calculate the correct values, and hence it reduces the chance of errors.
[Flags]
public enum SomeBitField
{
open = 1 << 0,   //1
closed = 1 << 1, //2
maybe = 1 << 2,  //4
other = 1 << 3   //8
...
}
To avoid typing out the values for a Flags enum by hand.
public enum MyEnum
{
open = 0x01,
close = 0x02,
Maybe = 0x04,
........
}
This is to make an enum that you can combine.
What it effectively means is this:
public enum MyEnum
{
open = 1,
close = 2,
Maybe = 4,
//...
}
This is just a more bulletproof method of creating a [Flags] enum.
It's just meant to be a cleaner / more intuitive way of writing the bits. The shift amounts 0, 1, 2, 3 form a more human-readable sequence than 0x1, 0x2, 0x4, 0x8, etc.
Lots of answers here describing what this mechanic allows you to do, but not why
you would want to use it. Here's why.
Short version:
This notation helps when interacting with other components and communicating
with other engineers because it tells you explicitly what bit in a word is being
set or clear instead of obscuring that information inside a numeric value.
So I could call you up on the phone and say "Hey, what bit is for opening the
file?" And you'd say, "Bit 0". And I'd write in my code open = 1 << 0.
Because the number to the right of << tells you the bit number.
Long version:
Traditionally bits in a word are numbered from right to left, starting at zero.
So the least-significant bit is bit number 0 and you count up as you go toward
the most-significant bit. There are several benefits to labeling bits this
way.
One benefit is that you can talk about the same bit regardless of word size.
E.g., I could say that in both the 32-bit word 0x384A and 8-bit word 0x63, bits
6 and 1 are set. If you numbered your bits in the other direction, you couldn't
do that.
Another benefit is that a bit's value is simply 2 raised to the power of the bit
position. E.g., binary 0101 has bits 2 and 0 set. Bit 2 contributes the
value 4 (2^2) to the number, and bit 0 contributes the value 1 (2^0). So the
number's value is of course 4 + 1 = 5.
That long-winded background explanation brings us to the point: The << notation tells you the bit number just by looking at it.
The number 1 by itself in the statement 1 << n is simply a single bit set in
bit position 0. When you shift that number left, you're then moving that set
bit to a different position in the number. Conveniently, the amount you shift
tells you the bit number that will be set.
1 << 5: This means bit 5. The value is 0x20.
1 << 12: This means bit 12. The value is 0x1000.
1 << 17: This means bit 17. The value is 0x20000.
1 << 54: This means bit 54. The value is 0x40000000000000.
(You can probably see that this notation might be helpful if
you're defining bits in a 64-bit number)
This notation really comes in handy when you're interacting with another
component, like mapping bits in a word to a hardware register. Like you might
have a device that turns on when you write to bit 7. So the hardware engineer
would write a data sheet that says bit 7 enables the device. And you'd write in
your code ENABLE = 1 << 7. Easy as that.
Oh shoot. The engineer just sent an errata to the datasheet saying that it was
supposed to be bit 15, not bit 7. That's OK, just change the code to
ENABLE = 1 << 15.
What if ENABLE were actually when both bits 7 and 1 were set at the same time?
ENABLE = (1 << 7) | (1 << 1).
It might look weird and obtuse at first, but you'll get used to it. And you'll
appreciate it if you ever explicitly need to know the bit number of something.
Each value is equal to a power of two.
public enum SomeEnum
{
Enum1 = 1 << 0, //1
Enum2 = 1 << 1, //2
Enum3 = 1 << 2, //4
Enum4 = 1 << 3 //8
}
And with such enum you will have function which looks like this:
void foo(unsigned int flags)
{
for (int i = 0; i < MAX_NUMS; i++)
if (1 << i & flags)
{
//do some stuff...
//the parameter to that stuff is probably either i or the enum value
}
}
And call to that function would be foo(Enum2 | Enum3); and it will do something with all given enum values.
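In C# the same idea is usually written with a [Flags] enum; a minimal sketch (the enum members mirror the SomeBitField example above):

```csharp
using System;

[Flags]
public enum SomeBitField
{
    None   = 0,
    Open   = 1 << 0, // 1
    Closed = 1 << 1, // 2
    Maybe  = 1 << 2, // 4
    Other  = 1 << 3  // 8
}

class FlagsDemo
{
    static void Main()
    {
        // Combine two flags with bitwise OR; [Flags] makes ToString list them.
        var flags = SomeBitField.Closed | SomeBitField.Maybe;
        Console.WriteLine(flags);                             // Closed, Maybe

        // Two equivalent ways to test a single flag.
        Console.WriteLine(flags.HasFlag(SomeBitField.Open));  // False
        Console.WriteLine((flags & SomeBitField.Maybe) != 0); // True
    }
}
```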

Understanding the behavior of a single ampersand operator (&) on integers

I understand that the single ampersand operator is normally used for a 'bitwise AND' operation. However, can anyone help explain the interesting results you get when you use it for comparison between two numbers?
For example;
(6 & 2) = 2
(10 & 5) = 0
(20 & 25) = 16
(123 & 20) = 16
I'm not seeing any logical link between these results and I can only find information on comparing booleans or single bits.
Compare the binary representations of each of those.
110 & 010 = 010
1010 & 0101 = 0000
10100 & 11001 = 10000
1111011 & 0010100 = 0010000
In each case, a digit is 1 in the result only when it is 1 on both the left AND right side of the input.
You need to convert your numbers to binary representation and then you will see the link between results: 6 & 2 = 2 is actually 110 & 010 = 010, etc.
10 & 5 is 1010 & 0101 = 0000
The binary and operation is performed on the integers, represented in binary. For example
110 (6)
010 (2)
--------
010 (2)
The bitwise AND does exactly that: it performs an AND operation on the bits.
So to anticipate the result you need to look at the bits, not the numbers.
AND gives you a 1 only if there's a 1 in both numbers in the same position:
6(110) & 2(010) = 2(010)
10(1010) & 5(0101) = 0(0000)
A bitwise OR will give you 1 if there's 1 in either numbers in the same position:
6(110) | 2(010) = 6(110)
10(1010) | 5(0101) = 15(1111)
6 = 0110
2 = 0010
6 & 2 = 0010
20 = 10100
25 = 11001
20 & 25 = 10000
Etc...
Internally, Integers are stored in binary format. I strongly suggest you read about that. Knowing about the bitwise representation of numbers is very important.
That being said, the bitwise comparison compares the bits of the parameters:
Decimal: 6 & 2 = 2
Binary: 0110 & 0010 = 0010
Bitwise AND matches the bits in binary notation one by one, and the result is the bits that are common between the two numbers.
To convert a number to binary you need to understand the binary system.
For example
6 = 110 binary
The 110 represents 1*4 + 1*2 + 0*1 = 6.
2 then is
0*4 + 1*2 + 0*1 = 2.
Bitwise AND only retains the positions where both numbers have the bit set, in this case the bit worth 2, and the result is then 2.
Every extra bit is worth double the last, so a 4-bit number uses the multipliers 8, 4, 2, 1 and can therefore represent all numbers from 0 to 15 (the sum of the multipliers).
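The pairs from the question can be inspected in C# with Convert.ToString(value, 2), which renders an int in base 2; a small sketch (the Show helper is just for illustration):

```csharp
using System;

class BitwiseAndDemo
{
    // Print a, b and a & b in binary, plus the decimal result.
    static void Show(int a, int b)
    {
        Console.WriteLine($"{Convert.ToString(a, 2),8} & {Convert.ToString(b, 2),8}"
                        + $" = {Convert.ToString(a & b, 2),8} ({a & b})");
    }

    static void Main()
    {
        Show(6, 2);    // result 2
        Show(10, 5);   // result 0
        Show(20, 25);  // result 16
        Show(123, 20); // result 16
    }
}
```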

Why AND two numbers to get a Boolean?

I am working on a little Hardware interface project based on the Velleman k8055 board.
The example code comes in VB.Net and I'm rewriting this into C#, mostly to have a chance to step through the code and make sense of it all.
One thing has me baffled though:
At one stage they read all digital inputs and then set a checkbox based on the answer to the read digital inputs (which come back in an Integer) and then they AND this with a number:
i = ReadAllDigital
cbi(1).Checked = (i And 1)
cbi(2).Checked = (i And 2) \ 2
cbi(3).Checked = (i And 4) \ 4
cbi(4).Checked = (i And 8) \ 8
cbi(5).Checked = (i And 16) \ 16
I have not done Digital systems in a while and I understand what they are trying to do but what effect would it have to AND two numbers? Doesn't everything above 0 equate to true?
How would you translate this to C#?
This is doing a bitwise AND, not a logical AND.
Each of those basically determines whether a single bit in i is set, for instance:
5 AND 4 = 4
5 AND 2 = 0
5 AND 1 = 1
(Because 5 = binary 101, and 4, 2 and 1 are the decimal values of binary 100, 010 and 001 respectively.)
I think you'll have to translate it to this:
(i & 1) == 1
(i & 2) == 2
(i & 4) == 4
etc... (the parentheses are needed because in C# the == operator binds more tightly than &)
This is using the bitwise AND operator.
When you use the bitwise AND operator, it compares the binary representations of the two given values and returns a value in which only those bits are set that are set in both operands.
For instance, when you do this:
2 & 2
It will do this:
0010 & 0010
And this will result in:
0010
0010
&----
0010
Then if you compare this result with 2 (0010), it will of course return true.
Just to add:
It's called bitmasking
http://en.wikipedia.org/wiki/Mask_(computing)
A boolean only requires 1 bit. In the implementation of most programming languages, a boolean takes more than a single bit. In a PC this won't be a big waste, but embedded systems usually have very limited memory, so the waste is really significant. To save space, booleans are packed together; this way each boolean variable only takes up 1 bit.
You can think of it as doing something like an array indexing operation, with a byte (= 8 bits) becoming like an array of 8 boolean variables, so maybe that's your answer: use an array of booleans.
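In C#, BitArray (mentioned elsewhere on this page) gives you exactly that byte-as-array-of-booleans view; a minimal sketch with an arbitrary example byte:

```csharp
using System;
using System.Collections;

class PackedBools
{
    static void Main()
    {
        // One byte holding 8 boolean "slots"; BitArray does the masking for you.
        // Bit 0 is the least-significant bit of the byte.
        var bits = new BitArray(new byte[] { 0b0001_0101 });
        Console.WriteLine(bits[0]); // True  (bit 0 is set)
        Console.WriteLine(bits[1]); // False
        Console.WriteLine(bits[2]); // True
        Console.WriteLine(bits[4]); // True
    }
}
```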
Think of this in binary e.g.
10101010
AND
00000010
yields 00000010
i.e. not zero. Now if the first value was
10101000
you'd get
00000000
i.e. zero.
Note the further division to reduce everything to 1 or 0.
(i And 16) \ 16 extracts the value (1 or 0) of the 5th bit.
1xxxx And 16 = 16, and 16 \ 16 = 1
0xxxx And 16 = 0, and 0 \ 16 = 0
The And operator performs "...bitwise conjunction on two numeric expressions", which maps to '&' in C#. The '\' is integer division, and the equivalent in C# is '/', provided that both operands are integer types.
The constant numbers are masks (think of them in binary). So what the code does is apply the bitwise AND operator on the byte and the mask and divide by the number, in order to get the bit.
For example:
xxxxxxxx & 00000100 = 00000x00
if x == 1
00000x00 / 00000100 = 00000001
else if x == 0
00000x00 / 00000100 = 00000000
In C# use the BitArray class to directly index individual bits.
To set an individual bit i is straightforward:
b |= 1 << i;
To reset an individual bit i is a little more awkward:
b &= ~(1 << i);
Be aware that both the bitwise operators and the shift operators tend to promote everything to int which may unexpectedly require casting.
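The set/reset/test idioms above, as a runnable sketch (the bit positions are arbitrary examples):

```csharp
using System;

class BitTwiddle
{
    static void Main()
    {
        int b = 0;

        b |= 1 << 3;                     // set bit 3
        Console.WriteLine(b);            // 8

        b |= 1 << 0;                     // set bit 0
        Console.WriteLine(b);            // 9

        b &= ~(1 << 3);                  // reset bit 3
        Console.WriteLine(b);            // 1

        bool bit0 = (b & (1 << 0)) != 0; // test bit 0
        Console.WriteLine(bit0);         // True
    }
}
```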
As said this is a bitwise AND, not a logical AND. I do see that this has been said quite a few times before me, but IMO the explanations are not so easy to understand.
I like to think of it like this:
Write up the binary numbers under each other (here I'm doing 5 and 1):
101
001
Now we need to turn this into a binary number in which a 1 appears only in the positions where both numbers have a 1, that is, in this case:
001
In this case it gives the same number as the 2nd number, so this operation (in VB) returns true. Let's look at the other examples (using 5 as i):
(5 and 2)
101
010
----
000
(false)
(5 and 4)
101
100
---
100
(true)
(5 and 8)
0101
1000
----
0000
(false)
(5 and 16)
00101
10000
-----
00000
(false)
EDIT: and obviously I miss the entire point of the question - here's the translation to C#:
cbi[1].Checked = (i & 1) == 1;
cbi[2].Checked = (i & 2) == 2;
cbi[3].Checked = (i & 4) == 4;
cbi[4].Checked = (i & 8) == 8;
cbi[5].Checked = (i & 16) == 16;
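The five lines can also be collapsed into a loop over bit positions; a sketch, with a made-up input value standing in for what ReadAllDigital would return:

```csharp
using System;

class DigitalInputs
{
    static void Main()
    {
        int i = 0b10110; // hypothetical value, as if returned by ReadAllDigital

        // Equivalent to the five explicit lines: test one bit per checkbox.
        for (int bit = 0; bit < 5; bit++)
        {
            bool isChecked = (i & (1 << bit)) != 0;
            Console.WriteLine($"cbi[{bit + 1}].Checked = {isChecked}");
        }
    }
}
```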
I prefer to use hexadecimal notation when bit twiddling (e.g. 0x10 instead of 16). It makes more sense as you increase your bit depths as 0x20000 is better than 131072.

Categories

Resources