This is actually fairly tricky to Google.
How do you SET (bitwise or) the top two bits of a 32 bit int?
I am getting compiler warnings from everything I try.
Try this:
integerVariable |= 3 << 30;
It may be clearer to use (1 << 31) | (1 << 30) instead of (3 << 30), or you could add a comment about the intent. In any case, the compiler folds the expression to a single constant, which equals int.MinValue >> 1 (i.e. int.MinValue / 2).
If it's a uint:
uintVar |= 3u << 30;
integerVariable |= unchecked((int)0xC0000000);
The cast is needed for a signed int because 0xC0000000 is a uint literal and won't implicitly convert. For an unsigned variable, use 0xC0000000u directly, with no cast.
Showing the entire 32-bit integer in hex notation is clearer to me than the bit shifts in Mehrdad's answer. They probably compile to the same thing, though, so use whichever looks clearer to you.
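A quick sketch (variable names are illustrative) confirming that the shift form and the hex literal produce the same bit pattern:

```csharp
using System;

class TopBits
{
    static void Main()
    {
        int integerVariable = 0;
        integerVariable |= 3 << 30;               // shift form
        int viaHex = unchecked((int)0xC0000000);  // hex form; the cast is needed for a signed int
        Console.WriteLine(integerVariable == viaHex);  // True
        Console.WriteLine(Convert.ToString(integerVariable, 2));
        // 11000000000000000000000000000000 -- top two bits set
    }
}
```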
Related
I am trying to translate Java code that uses the logical right shift (>>>) (Difference between >>> and >>) to C#
Java code is
return hash >>> 24 ^ hash & 0xFFFFFF;
C# rejects >>> as a syntax error.
How can I fix that?
Update 1
People recommend using >> in C#, but that didn't solve the problem.
System.out.println("hash 1 !!! = " + (-986417464>>>24));
is 197
but
Console.WriteLine("hash 1 !!! = " + (-986417464 >> 24));
is -59
Thank you!
Java needed to introduce >>> because its only unsigned type is char, and arithmetic on char is carried out on ints.
C#, on the other hand, has unsigned types, which perform right shift without sign extension:
uint h = (uint)hash;
return h >> 24 ^ h & 0xFFFFFF;
For C# you can just use >>, as long as the left-hand operand is unsigned (cast it to uint first if necessary):
If the left-hand operand is of type uint or ulong, the right-shift operator performs a logical shift: the high-order empty bit positions are always set to zero.
From the docs.
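Putting the two answers together, a sketch using the values from the question (C# 11 and later also provide a >>> operator directly, but the cast works on every version):

```csharp
using System;

class UnsignedShift
{
    static void Main()
    {
        int hash = -986417464;

        // Java: hash >>> 24 -- a logical shift, zero-filled from the left.
        // C#:   cast to uint; >> on an unsigned type is a logical shift.
        uint h = (uint)hash;
        Console.WriteLine(h >> 24);   // 197, matching Java's hash >>> 24

        // The full expression from the question:
        long result = h >> 24 ^ h & 0xFFFFFF;
        Console.WriteLine(result);
    }
}
```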
Can someone explain what the << is doing in this function:
return (b & (1 << pos)) != 0;
And is there an equivalent to this in T-SQL?
It's a bitwise shift.
Shift is not mentioned on the Bitwise Operators (Transact-SQL) page, so I would say it is not available in T-SQL. However, a shift in base 2 is equivalent to multiplying by a power of 2, so you can use that to perform a similar operation without actually using a bitwise shift.
<< in C# means "shift number left". You can simulate it by multiplying by a corresponding power of two:
b & POWER(2, pos) <> 0
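For comparison, a small C# sketch of the same test (IsBitSet is an illustrative name, not part of the original code):

```csharp
using System;

class BitTest
{
    // True when bit `pos` (counting from 0 at the least-significant end) is set in b.
    static bool IsBitSet(int b, int pos) => (b & (1 << pos)) != 0;

    static void Main()
    {
        int b = 0b0101;                     // bits 0 and 2 set
        Console.WriteLine(IsBitSet(b, 0));  // True
        Console.WriteLine(IsBitSet(b, 1));  // False
        Console.WriteLine(IsBitSet(b, 2));  // True
    }
}
```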
I was looking at the code I have currently in my project and found something like this:
public enum MyEnum
{
open = 1 << 00,
close = 1 << 01,
Maybe = 1 << 02,
........
}
The << operator is the left-shift operator, which shifts the first operand left by the number of bits specified in the second operand.
But why would someone use this in an enum declaration?
This allows you to do something like this:
var myEnumValue = MyEnum.open | MyEnum.close;
without needing to work out the power-of-two values yourself.
(like this):
public enum MyEnum
{
open = 1,
close = 2,
Maybe = 4,
........
}
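Because each member occupies its own bit, the combined value can be stored and tested later. A minimal sketch, assuming the enum is marked with [Flags]:

```csharp
using System;

[Flags]
public enum MyEnum
{
    open = 1 << 0,
    close = 1 << 1,
    Maybe = 1 << 2
}

class Demo
{
    static void Main()
    {
        var myEnumValue = MyEnum.open | MyEnum.close;
        Console.WriteLine(myEnumValue);                        // open, close
        Console.WriteLine((myEnumValue & MyEnum.close) != 0);  // True: close is set
        Console.WriteLine(myEnumValue.HasFlag(MyEnum.Maybe));  // False
    }
}
```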
This is usually used with bit fields, since the pattern is clear, it removes the need to calculate the correct values by hand, and hence it reduces the chance of errors:
[Flags]
public enum SomeBitField
{
open = 1 << 0,   //1
closed = 1 << 1, //2
maybe = 1 << 2,  //4
other = 1 << 3   //8
...
}
To avoid typing out the values for a Flags enum by hand.
public enum MyEnum
{
open = 0x01,
close = 0x02,
Maybe = 0x04,
........
}
This is to make an enum that you can combine.
What it effectively means is this:
public enum MyEnum
{
open = 1,
close = 2,
Maybe = 4,
//...
}
This is just a more bulletproof method of creating a [Flags] enum.
It's just meant to be a cleaner / more intuitive way of writing the bits: the shift counts 0, 1, 2, 3 form a more human-readable sequence than 0x1, 0x2, 0x4, 0x8, etc.
Lots of answers here describing what this mechanic allows you to do, but not why
you would want to use it. Here's why.
Short version:
This notation helps when interacting with other components and communicating
with other engineers because it tells you explicitly what bit in a word is being
set or clear instead of obscuring that information inside a numeric value.
So I could call you up on the phone and say "Hey, what bit is for opening the
file?" And you'd say, "Bit 0". And I'd write in my code open = 1 << 0.
Because the number to the right of << tells you the bit number.
Long version:
Traditionally bits in a word are numbered from right to left, starting at zero.
So the least-significant bit is bit number 0 and you count up as you go toward
the most-significant bit. There are several benefits to labeling bits this
way.
One benefit is that you can talk about the same bit regardless of word size.
E.g., I could say that in both the 32-bit word 0x384A and 8-bit word 0x63, bits
6 and 1 are set. If you numbered your bits in the other direction, you couldn't
do that.
Another benefit is that a bit's value is simply 2 raised to the power of the bit
position. E.g., binary 0101 has bits 2 and 0 set. Bit 2 contributes the
value 4 (2^2) to the number, and bit 0 contributes the value 1 (2^0). So the
number's value is of course 4 + 1 = 5.
That long-winded background explanation brings us to the point: The << notation tells you the bit number just by looking at it.
The number 1 by itself in the statement 1 << n is simply a single bit set in
bit position 0. When you shift that number left, you're then moving that set
bit to a different position in the number. Conveniently, the amount you shift
tells you the bit number that will be set.
1 << 5: This means bit 5. The value is 0x20.
1 << 12: This means bit 12. The value is 0x1000.
1 << 17: This means bit 17. The value is 0x20000.
1 << 54: This means bit 54. The value is 0x40000000000000.
(You can probably see that this notation might be helpful if
you're defining bits in a 64-bit number)
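These values are easy to check (note the 1L: once the bit number reaches 32, the 1 being shifted must itself be a long or ulong):

```csharp
using System;

class BitValues
{
    static void Main()
    {
        Console.WriteLine($"0x{1 << 5:X}");    // 0x20    -> bit 5
        Console.WriteLine($"0x{1 << 12:X}");   // 0x1000  -> bit 12
        Console.WriteLine($"0x{1 << 17:X}");   // 0x20000 -> bit 17
        Console.WriteLine($"0x{1L << 54:X}");  // 0x40000000000000 -> bit 54
    }
}
```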
This notation really comes in handy when you're interacting with another
component, like mapping bits in a word to a hardware register. Like you might
have a device that turns on when you write to bit 7. So the hardware engineer
would write a data sheet that says bit 7 enables the device. And you'd write in
your code ENABLE = 1 << 7. Easy as that.
Oh shoot. The engineer just sent an errata to the datasheet saying that it was
supposed to be bit 15, not bit 7. That's OK, just change the code to
ENABLE = 1 << 15.
What if ENABLE were actually when both bits 7 and 1 were set at the same time?
ENABLE = (1 << 7) | (1 << 1).
It might look weird and obtuse at first, but you'll get used to it. And you'll
appreciate it if you ever explicitly need to know the bit number of something.
Each value is a power of two.
public enum SomeEnum
{
Enum1 = 1 << 0, //1
Enum2 = 1 << 1, //2
Enum3 = 1 << 2, //4
Enum4 = 1 << 3 //8
}
And with such enum you will have function which looks like this:
void foo(uint flags)
{
    for (int i = 0; i < MAX_NUMS; i++)
        if ((1 << i & flags) != 0)
        {
            //do some stuff...
            //the parameter to that stuff is probably i, or the enum value (1 << i)
        }
}
A call such as foo(Enum2 | Enum3) will then do that work for every enum value that was passed in.
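In C#, that loop might be sketched like this (the loop bound 4 is illustrative; it just needs to cover the highest bit used):

```csharp
using System;

[Flags]
enum SomeEnum
{
    Enum1 = 1 << 0,
    Enum2 = 1 << 1,
    Enum3 = 1 << 2,
    Enum4 = 1 << 3
}

class FlagLoop
{
    // Visit every bit that is set in `flags`.
    static void Foo(SomeEnum flags)
    {
        for (int i = 0; i < 4; i++)
            if (((int)flags & (1 << i)) != 0)
                Console.WriteLine((SomeEnum)(1 << i));
    }

    static void Main() => Foo(SomeEnum.Enum2 | SomeEnum.Enum3);  // prints Enum2, then Enum3
}
```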
I have never used C# before and I'm trying to translate a function to C. All was going well until I reached this weird line. Can someone help?
out Int128 remainder;
remainder._lo |= 1; ???
assuming in C you have an Int128 struct of the same nature... in C it would be
remainder._lo |= 1;
which just says do a bitwise OR with 1
Some C compilers provide a 128-bit integer type you could use, in which case you'd end up just doing remainder |= 1;
This implies that
remainder._lo
is an integer of some type, and the |= operator is bitwise or.
So this is equivalent to
remainder._lo = remainder._lo | 1
That might be legal C depending on your context, but it should give you the key to it.
It's the equivalent of
remainder._lo = remainder._lo | 1;
where | is the bitwise OR operator; the |= compound form should be supported in C as-is.
Int128 is presumably a structure with _hi and _lo members to store the high and low 64 bits of the 128-bit integer. This line is just doing a bit-wise or of the low 64 bits with 1, effectively switching on the least significant bit.
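A small sketch of the effect, with an illustrative stand-in for the struct (the field names follow the question; the real type may differ):

```csharp
using System;

class OrEquals
{
    struct Int128Like            // illustrative stand-in, not the real Int128
    {
        public ulong _hi;
        public ulong _lo;
    }

    static void Main()
    {
        var remainder = new Int128Like { _lo = 0xF0 };
        remainder._lo |= 1;      // same as: remainder._lo = remainder._lo | 1
        Console.WriteLine($"0x{remainder._lo:X}");   // 0xF1 -- only bit 0 changed
    }
}
```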
I was studying shift operators in C#, trying to find out
when to use them in my code.
I found an answer but for Java, you could:
a) Make integer multiplication and division faster:
4839534 * 4 can be done like this:
4839534 << 2
or
543894 / 2 can be done like this: 543894 >> 1
Shift operations are much faster than multiplication on most processors.
b) Reassembling byte streams to int values
c) For accelerating operations with graphics since Red, Green and Blue colors coded by separate bytes.
d) Packing small numbers into one single long...
For b, c and d I can't imagine here a real sample.
Does anyone know if we can accomplish all these items in C#?
Is there more practical use for shift operators in C#?
There is no need to use them for optimisation purposes because the compiler will take care of this for you.
Only use them when shifting bits is the real intent of your code (as in the remaining examples in your question). The rest of the time just use multiply and divide so readers of your code can understand it at a glance.
Unless there is a very compelling reason, my opinion is that clever tricks like that typically just make for more confusing code with little added value. The compiler writers are a smart bunch of developers and know a lot more of those tricks than the average programmer does. For example, dividing an integer by a power of 2 may be faster with the shift operator than with a division, but it usually isn't necessary: you can see in the generated assembly that both the Microsoft C/C++ compiler and gcc perform these optimizations for you.
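There is also a correctness reason to leave this to the compiler: for negative signed values, >> and division by 2 are not even the same operation, because an arithmetic shift rounds toward negative infinity while integer division rounds toward zero:

```csharp
using System;

class ShiftVsDivide
{
    static void Main()
    {
        Console.WriteLine(12 << 2);   // 48, same as 12 * 4
        Console.WriteLine(12 >> 1);   // 6,  same as 12 / 2
        Console.WriteLine(-3 >> 1);   // -2 (rounds toward negative infinity)
        Console.WriteLine(-3 / 2);    // -1 (rounds toward zero)
    }
}
```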
I will share an interesting use I've stumbled across in the past. This example is shamelessly copied from a supplemental answer to the question, "What does the [Flags] Enum Attribute mean in C#?"
[Flags]
public enum MyEnum
{
None = 0,
First = 1 << 0,
Second = 1 << 1,
Third = 1 << 2,
Fourth = 1 << 3
}
This can be easier to expand upon than writing literal 1, 2, 4, 8, ... values, especially once you get past 17 flags.
The tradeoff is, if you need more than 31 flags (1 << 30), you also need to be careful to specify your enum as something with a higher upper bound than a signed integer (by declaring it as public enum MyEnum : ulong, for example, which will give you up to 64 flags). This is because...
1 << 29 == 536870912
1 << 30 == 1073741824
1 << 31 == -2147483648
1 << 32 == 1
1 << 33 == 2
By contrast, if you set an enum value directly to 2147483648, the compiler will throw an error.
As pointed out by ClickRick, even if your enum derives from ulong, your bit shift operation has to be performed against a ulong or your enum values will still be broken.
[Flags]
public enum MyEnum : ulong
{
None = 0,
First = 1 << 0,
Second = 1 << 1,
Third = 1 << 2,
Fourth = 1 << 3,
// Compiler error:
// Constant value '-2147483648' cannot be converted to a 'ulong'
// (Note this wouldn't be thrown if MyEnum derived from long)
ThirtySecond = 1 << 31,
// so what you would have to do instead is...
ThirtySecond = 1UL << 31,
ThirtyThird = 1UL << 32,
ThirtyFourth = 1UL << 33
}
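The wrap-around behavior described above is easy to verify: for an int shift the count is taken mod 32, and for a ulong shift mod 64, so 1 << 32 silently becomes 1 << 0:

```csharp
using System;

class ShiftWrap
{
    static void Main()
    {
        Console.WriteLine(1 << 31);    // -2147483648
        Console.WriteLine(1 << 32);    // 1 (the count 32 is masked to 0)
        Console.WriteLine(1UL << 31);  // 2147483648
        Console.WriteLine(1UL << 32);  // 4294967296
    }
}
```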
Check out these Wikipedia articles about the binary number system and the arithmetic shift. I think they will answer your questions.
The shift operators are rarely encountered in business applications today. They will appear frequently in low-level code that interacts with hardware or manipulates packed data. They were more common back in the days of 64k memory segments.