How is this enum assigned? What are the values for each?
public enum SiteRoles
{
    User = 1 << 0,
    Admin = 1 << 1,
    Helpdesk = 1 << 2
}
What is the use of assigning like this?
Used in this post
They're making a bit flag. Instead of writing the values as 1, 2, 4, 8, 16, etc., they left-shift 1 by one more place each time, which doubles the value at each step. One could argue that it's easier to read.
It allows bitwise operations on the enum value.
1 << 0 = 1 (binary 0001)
1 << 1 = 2 (binary 0010)
1 << 2 = 4 (binary 0100)
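As a minimal sketch (not from the original post) of what that enables with the SiteRoles enum above, combining roles and then testing for one:
SiteRoles roles = SiteRoles.User | SiteRoles.Admin;                   // 1 | 2 == 3 (binary 0011)
bool isAdmin    = (roles & SiteRoles.Admin) == SiteRoles.Admin;       // true
bool isHelpdesk = (roles & SiteRoles.Helpdesk) == SiteRoles.Helpdesk; // false
Marking the enum with [Flags] would additionally make roles.ToString() print "User, Admin" instead of "3".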
Related
I'm consuming a web API and I need to pass in an int value that corresponds to a bit flag. How do I calculate the int values to pass in? For instance, if I want Option B, Option E, and Option F - what would the corresponding int value be?
Also please give a few more examples, like if I only want Option G. Or if I want D and E.
[Flags]
public enum Includes
{
    OptionA = 1 << 0,
    OptionB = 1 << 1,
    OptionC = 1 << 2,
    OptionD = 1 << 3,
    OptionE = 1 << 4,
    OptionF = 1 << 5,
    OptionG = 1 << 6,
    OptionH = 1 << 7
}
int includes = ????
How do I calculate the int values to pass in?
By using bitwise OR, in the same way that this works:
int seven = 1|2|4;
Because in binary 1 is 0001, 2 is 0010, and 4 is 0100, when OR'd together they become 0111 (7).
Option B, Option E, and Option F
int bef = (int)(Includes.OptionB | Includes.OptionE | Includes.OptionF);
You can imagine the pattern you need to use for the others. It doesn't matter what order you OR them in.
For decoding a number we use a similar trick with &:
if ((bef & (int)Includes.OptionB) == (int)Includes.OptionB)
There is a helper method, Enum.HasFlag, that you can use too.
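To make the other examples from the question concrete, here is a short sketch (the variable names onlyG and dAndE are just illustrative; bef is the value computed above):
int onlyG = (int)Includes.OptionG;                                  // 1 << 6 = 64
int dAndE = (int)(Includes.OptionD | Includes.OptionE);             // 8 | 16 = 24
// bef from above is 2 | 16 | 32 = 50

// Decoding: a bitwise AND, or the Enum.HasFlag helper
bool hasB = ((Includes)bef & Includes.OptionB) == Includes.OptionB; // true
bool hasG = ((Includes)bef).HasFlag(Includes.OptionG);              // false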
Got a negative number (-2147483392)
I don't understand why it (correctly) casts to a flags enum.
Given
[Flags]
public enum ReasonEnum
{
    REASON1 = 1 << 0,
    REASON2 = 1 << 1,
    REASON3 = 1 << 2,
    // etc. more flags
    // But the ones that matter for this are
    REASON9 = 1 << 8,
    REASON17 = 1 << 31
}
Why does the following correctly report REASON9 and REASON17 based on a negative number?
var reason = -2147483392;
ReasonEnum strReason = (ReasonEnum)reason;
Console.WriteLine(strReason);
.NET Fiddle here
I say correctly because this was an event reason property fired from a COM component, and when cast to the enum value it was correct in the values it cast to (as per that event). The flags enum is as defined in the SDK documentation for the COM object. The COM object is third party and I have no control over the number; based on the interface it will always be supplied as an INT.
The topmost bit being set (the 31st in your case of Int32) means a negative number (see two's complement for details):
int reason = -2147483392;
string bits = Convert.ToString(reason, 2).PadLeft(32, '0');
Console.Write(bits);
Outcome:
10000000000000000000000100000000
^                      ^
|                      bit 8
bit 31
And so you have
-2147483392 == (1 << 31) | (1 << 8) == REASON17 | REASON9
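A quick sketch to confirm the decomposition (not part of the original answer):
int reason = (1 << 31) | (1 << 8);     // the two set bits from the diagram
Console.WriteLine(reason);             // -2147483392
Console.WriteLine((ReasonEnum)reason); // prints the combined flag names: REASON9, REASON17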
Say I have the following enum, which represents a combination of properties that can be assigned to an object.
public enum ObjectFlags
{
    None = 0,
    Red = 1 << 0, // 1
    Square = 1 << 1, // 2
    Small = 1 << 2, // 4
    Fast = 1 << 3, // 8
}
I store any selected property in a SQL database field called ObjectFlag:
Say I pass the Red and Small enums to the SaveOjectFlags method and store the integer combination in a field using bitwise operations.
public ActionResult SaveOjectFlags(List<string> myFlags)
{
    foreach (var flag in myFlags)
    {
        mydatabaseTable.ObjectFlag += (int)Enum.Parse(typeof(ObjectFlags), flag);
        _dbRepo.Save();
    }
}
Now I wish to get the values from my ObjectFlags field in my database and get the enums as a list of strings:
I have tried doing the following which didn’t quite work:
var test = (ObjectFlags)mydatabaseTable.ObjectFlag;
The value of the variable test above isn't a list of strings from the enum.
What am I doing wrong here?
See here.
The solution is:
[Flags]
public enum ObjectFlags
{
    None = 0,
    Red = 1 << 0, // 1
    Square = 1 << 1, // 2
    Small = 1 << 2, // 4
    Fast = 1 << 3, // 8
}
All I needed to do was add the [Flags] attribute, as Jon Skeet suggested.
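If the goal is still to get the stored value back as a list of strings, one possible sketch (assuming System.Linq is available and the field holds 5, i.e. Red | Small):
var test = (ObjectFlags)mydatabaseTable.ObjectFlag;          // (ObjectFlags)5 == Red | Small
Console.WriteLine(test);                                     // "Red, Small" once [Flags] is applied

List<string> flagNames = Enum.GetValues(typeof(ObjectFlags))
    .Cast<ObjectFlags>()
    .Where(f => f != ObjectFlags.None && test.HasFlag(f))
    .Select(f => f.ToString())
    .ToList();                                               // { "Red", "Small" }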
I understand that:
int bit = (number >> 3) & 1;
Will give me the bit 3 places from the right, so let's say 8 is 1000; that would give 0001.
What I don't understand is how "& 1" removes everything but the last bit to give an output of simply "1". I know that this works and I know how to get a bit from an int, but how is the code extracting the single bit?
Code...
int number = 8;
int bit = (number >> 3) & 1;
Console.WriteLine(bit);
Unless my boolean algebra from school fails me, what's happening should be equivalent to the following:
  1100110101101 // last bit is 1
& 0000000000001 // & 1
= 0000000000001 // = 1

  1100110101100 // last bit is 0
& 0000000000001 // & 1
= 0000000000000 // = 0
So when you do & 1, what you're basically doing is zeroing out all other bits except for the last one, which will remain whatever it was. Or, more technically speaking, you do a bitwise AND operation between two numbers, where one of them happens to be a 1 with all leading bits set to 0.
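Written out as code, a small sketch of the general pattern for reading bit n (counting from the right, starting at 0):
int number = 6573;                // 1100110101101 in binary, the value used above
int lastBit  = number & 1;        // 1: only the last bit survives the mask
int thirdBit = (number >> 3) & 1; // 1: shift bit 3 into the last position, then mask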
8 = 00001000
8 >> 1 = 00000100
8 >> 2 = 00000010
8 >> 3 = 00000001
If you use mask 1 = 00000001 then you have:
8 >> 3 = 00000001
1 = 00000001
(8 >> 3) & 1 = 00000001
Actually this is not hard to understand.
the "& 1" operation is just set all bits of the value to the "0", except the bit, which placed in the same position as the valuable bit in the value "1"
previous operation just shifts the all bits to the right. and places the checked bit to the position which won't be setted to "0" after operation "& 1"
fo example
number is 1011101
number >> 3 makes it 0001011
but (number >> 3) & 1 makes it 0000001
When you right shift 8 by 3 you get 0001.
0001 & 0001 = 0001, which converted to an int gives you 1.
So, when the value 0001 is assigned to an int, it will print 1 and not 0001 or 0000 0001. All the leading zeroes are discarded.
There's a statement a co-worker of mine wrote which I don't completely understand. Unfortunately he's not available right now, so here it is (with modified names, we're working on a game in Unity).
private readonly int FRUIT_LAYERS =
(1 << LayerMask.NameToLayer("Apple"))
| (1 << LayerMask.NameToLayer("Banana"));
NameToLayer takes a string and returns an integer. I've always seen left shift operators used with the constant integer on the right side, not the left, and all the examples I'm finding via Google follow that approach. In this case, I think he's pushing Apple and Banana onto the same relative layer (which I'll use later for filtering). In the future there would be more "fruits" to filter by. Any brilliant stackoverflowers who can give me an explanation of what's happening on those lines?
1 << x is essentially saying "give me a number where the (x+1)-th bit is one and the rest of the bits are all zero".
x | y is a bitwise OR, so it will go through each bit from 1 to n, and if that bit is one in either x or y then that bit will be one in the result; if not, it will be zero.
So if LayerMask.NameToLayer("Apple") returns 2 and LayerMask.NameToLayer("Banana") returns 3 then FRUIT_LAYERS will be a number with the 3rd and 4th bits set, which is 1100 in binary, or 12 in base 10.
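As a small sketch of how such a mask is then typically used for the filtering mentioned in the question (same assumed layer numbers):
int appleLayer = LayerMask.NameToLayer("Apple");              // 2 in this example
bool appleIncluded = (FRUIT_LAYERS & (1 << appleLayer)) != 0; // true: bit 2 is set in 1100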
Your coworker is essentially using an int in place of a bool[32] to try to save on space. The block of code you show is analogous to
bool[] FRUIT_LAYERS = new bool[32];
FRUIT_LAYERS[LayerMask.NameToLayer("Apple")] = true;
FRUIT_LAYERS[LayerMask.NameToLayer("Banana")] = true;
You might want to consider a pattern more like this:
[Flags]
enum FruitLayers : int
{
    Apple = 1 << 0,
    Banana = 1 << 1,
    Kiwi = 1 << 2,
    ...
}
private readonly FruitLayers FRUIT_LAYERS = FruitLayers.Apple | FruitLayers.Banana;
The code is shifting the binary value 1 to the left; the number of places to shift is determined by the Apple and Banana layer numbers. After both values are shifted, they are ORed together bitwise.
Example:
Assume Apple returns 2 and Banana returns 3; you get:
1 << 2, which is 0100 (that means 4 in decimal)
1 << 3, which is 1000 (that means 8 in decimal)
Now 0100 bitwise OR with 1000 is 1100, which means 12.
1 << n is basically equivalent to 2^n.