C# binary literals

Is there a way to write binary literals in C#, like prefixing hexadecimal with 0x? 0b doesn't work.
If not, what is an easy way to do it? Some kind of string conversion?

Update
C# 7.0 now has binary literals, which is awesome.
[Flags]
enum Days
{
None = 0,
Sunday = 0b0000001,
Monday = 0b0000010, // 2
Tuesday = 0b0000100, // 4
Wednesday = 0b0001000, // 8
Thursday = 0b0010000, // 16
Friday = 0b0100000, // etc.
Saturday = 0b1000000,
Weekend = Saturday | Sunday,
Weekdays = Monday | Tuesday | Wednesday | Thursday | Friday
}
Original Post
Since the topic seems to have turned to declaring bit-based flag values in enums, I thought it would be worth pointing out a handy trick for this sort of thing. The left-shift operator (<<) will allow you to push a bit to a specific binary position. Combine that with the ability to declare enum values in terms of other values in the same class, and you have a very easy-to-read declarative syntax for bit flag enums.
[Flags]
enum Days
{
None = 0,
Sunday = 1,
Monday = 1 << 1, // 2
Tuesday = 1 << 2, // 4
Wednesday = 1 << 3, // 8
Thursday = 1 << 4, // 16
Friday = 1 << 5, // etc.
Saturday = 1 << 6,
Weekend = Saturday | Sunday,
Weekdays = Monday | Tuesday | Wednesday | Thursday | Friday
}

C# 7.0 supports binary literals (and optional digit separators via underscore characters).
An example:
int myValue = 0b0010_0110_0000_0011;
You can also find more information on the Roslyn GitHub page.

Only integer and hex directly, I'm afraid (ECMA 334v4):
9.4.4.2 Integer literals
Integer literals are used to write values of types int, uint, long, and ulong. Integer literals have two possible forms: decimal and hexadecimal.
To parse, you can use:
int i = Convert.ToInt32("01101101", 2);
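A small self-contained sketch of the round trip, using `Convert.ToString` for the opposite direction:

```csharp
using System;

class BinaryStringDemo
{
    static void Main()
    {
        // Parse a binary string into an int (base 2).
        int i = Convert.ToInt32("01101101", 2);
        Console.WriteLine(i); // 109

        // Format an int back into a binary string.
        // Note that leading zeros are not preserved.
        string s = Convert.ToString(i, 2);
        Console.WriteLine(s); // 1101101
    }
}
```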

Adding to @StriplingWarrior's answer about bit flags in enums, there's an easy convention you can use in hexadecimal for counting upward through the bit shifts. Use the sequence 1-2-4-8, move one column to the left, and repeat.
[Flags]
enum Scenery
{
Trees = 0x001, // 000000000001
Grass = 0x002, // 000000000010
Flowers = 0x004, // 000000000100
Cactus = 0x008, // 000000001000
Birds = 0x010, // 000000010000
Bushes = 0x020, // 000000100000
Shrubs = 0x040, // 000001000000
Trails = 0x080, // 000010000000
Ferns = 0x100, // 000100000000
Rocks = 0x200, // 001000000000
Animals = 0x400, // 010000000000
Moss = 0x800, // 100000000000
}
Scan down starting with the right column and notice the pattern 1-2-4-8 (shift) 1-2-4-8 (shift) ...
To answer the original question, I second @Sahuagin's suggestion to use hexadecimal literals. If you work with binary numbers often enough for this to be a concern, it's worth your while to get the hang of hexadecimal.
If you need to see binary numbers in source code, I suggest adding comments with binary literals like I have above.

You can always create quasi-literals, constants which contain the value you are after:
const int b001 = 1;
const int b010 = 2;
const int b011 = 3;
// etc ...
Debug.Assert((b001 | b010) == b011);
If you use them often then you can wrap them in a static class for re-use.
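A minimal sketch of what such a wrapper could look like; the class and constant names here are just an illustration, not a standard:

```csharp
using System.Diagnostics;

// Hypothetical wrapper class for the quasi-literals.
static class Bin
{
    public const int b0001 = 1;
    public const int b0010 = 2;
    public const int b0100 = 4;
    public const int b1000 = 8;
}

class BinDemo
{
    static void Main()
    {
        Debug.Assert((Bin.b0001 | Bin.b0100) == 5); // 0b0101
    }
}
```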
However, slightliy off-topic, if you have any semantics associated with the bits (known at compile time) I would suggest using an Enum instead:
enum Flags
{
First = 0,
Second = 1,
Third = 2,
SecondAndThird = 3
}
// later ...
Debug.Assert((Flags.Second | Flags.Third) == Flags.SecondAndThird);

If you look at the language feature implementation status of the .NET Compiler Platform ("Roslyn"), you can clearly see that this is a planned feature for C# 6.0, so in the next release we should be able to do it in the usual way.

string sTable="static class BinaryTable\r\n{";
string stemp = "";
for (int i = 0; i < 256; i++)
{
stemp = System.Convert.ToString(i, 2);
while(stemp.Length<8) stemp = "0" + stemp;
sTable += "\tconst char nb" + stemp + "=" + i.ToString() + ";\r\n";
}
sTable += "}";
Clipboard.Clear();
Clipboard.SetText ( sTable);
MessageBox.Show(sTable);
Using this for 8-bit binary, I make a static class and put it into the clipboard; then it gets pasted into the project and added to the using section, so anything with nb001010 is taken out of a table. At least it's static, but still...
I use C# for a lot of PIC graphics coding and use 0b101010 a lot in Hi-Tech C.
-- sample from code output --
static class BinaryTable
{ const char nb00000000=0;
const char nb00000001=1;
const char nb00000010=2;
const char nb00000011=3;
const char nb00000100=4;
//etc, etc, etc, etc, etc, etc, etc,
}

The binary literal feature was not implemented in C# 6.0 and Visual Studio 2015, but on 30 March 2016 Microsoft announced the new Visual Studio '15' Preview, with which we can use binary literals.
We can use one or more underscore ( _ ) characters as digit separators, so a code snippet would look something like:
int x = 0b10___10_0__________________00; //binary value of 80
int SeventyFive = 0B100_________1011; //binary value of 75
WriteLine($" {x} \n {SeventyFive}");
We can use either 0b or 0B, as shown in the code snippet above.
If you do not want to use digit separators, you can simply omit them, as in the snippet below:
int x = 0b1010000; //binary value of 80
int SeventyFive = 0B1001011; //binary value of 75
WriteLine($" {x} \n {SeventyFive}");

While not possible using a literal, maybe BitConverter can also be a solution?

Though the string parsing solution is the most popular, I don't like it, because parsing a string can be a significant performance hit in some situations.
When a kind of bit field or binary mask is needed, I'd rather write it like this (note that this is an ordinary decimal literal whose digits are read as bits):
long bitMask = 1011001;
And later
int bit5 = BitField.GetBit(bitMask, 5);
Or
bool flag5 = BitField.GetFlag(bitMask, 5);
Where BitField class is
public static class BitField
{
public static int GetBit(int bitField, int index)
{
return (bitField / (int)Math.Pow(10, index)) % 10;
}
public static bool GetFlag(int bitField, int index)
{
return GetBit(bitField, index) == 1;
}
}

You can use 0b000001 since Visual Studio 2017 (C# 7.0).

Basically, I think the answer is NO, there is no easy way. Use decimal or hexadecimal constants - they are simple and clear. @RoyTinker's answer is also good - use a comment.
int someHexFlag = 0x010; // 000000010000
int someDecFlag = 8; // 000000001000
The other answers here present several useful workarounds, but I don't think they are better than the simple answer. The C# language designers probably considered a '0b' prefix unnecessary. Hex is easy to convert to binary, and most programmers are going to have to know the decimal equivalents of 0-8 anyway.
Also, when examining values in the debugger, they will be displayed as hex or decimal.

Related

How to convert bit to bit shift value

I have a difficulty selector set as an enum (None=0, Easy = 1<<0, Medium = 1<<1, Hard = 1<<2, Expert = 1<<3). Along with this, I have an array of point values I want to assign to these difficulties. So the array has indexes as so. [0, 100, 133, 166, 200].
The tricky bit, is this. I want to grab the index of the array, that is equivalent to the bit shift of the difficulty. So None = 0 (0000)-> Index = 0. Easy = 1 (0001)-> Index = 1. Medium = 2 (0010)-> Index = 2. Hard = 4 (0100) -> Index = 3. Expert = 8 (1000) -> Index = 4.
I tried doing the Square root originally, as I thought that it was powers of two, but quickly realized that it's actually not RAISED to two, it's just a base of two. So that would never work.
I also know I can get this value via a forloop, where I start at 8 (1000) for instance, and keep a counter as I shift right, and keep that going until it hits 0.
int difficulty = (int)LevelSetup.difficulty;
int difficultyIndex = 0;
while(difficulty != 0)
{
difficultyIndex++;
difficulty = difficulty >> 1;
}
currScorePerQuestion = ScorePerQuestion[difficultyIndex];
IE. Counter = 0; val = 8. | SHIFT | Counter = 1; val = 4; |SHIFT| Counter = 2; val = 2; |SHIFT| Counter = 3; val = 1; |SHIFT| Counter = 4; val = 0; |END| and we end with a value of 4.
The problem with this is that it seems really messy and overkill, especially if you wanted to go up to 64 bits and have lots of indices. I just know that there is some kind of algorithm that could do this conversion very simply; I am just struggling to come up with what that equation is exactly.
Any help is super appreciated, thanks!
After asking my friends. They came up with and gave me this solution.
As binary works by raising 2 to the nth power, we always have a base of 2 raised to the number of the bit. So 2^3 is 8, which is the same as 1000 in binary.
Then, using the properties of Logarithms. You can use Log of base 2, which matches our base 2 powers, and take the log of it's value to get the exponential. ie Log2(2^3) = 3. And Log2(2^7) = 7.
luckily for us, binary matches this pattern completely, so a bit mask of (1000) is 8, which is equal to 2^3, so Log2(8) => 3.
To convert a bit into an index, ie (1000) is the 4th bit, so we want an index of 4.
Log base 2 of 8 -> Math.Log2(8) = 3. Then, to land on the right slot of the array (where index 0 is reserved for None), we just add 1.
This leaves us with the following algorithm:
int difficulty = (int)LevelSetup.difficulty;
currScorePerQuestion = difficulty == 0
? ScorePerQuestion[0]
: ScorePerQuestion[(int)Math.Log2(difficulty) + 1];
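The same mapping can also be written without floating-point using System.Numerics.BitOperations (available on .NET Core 3.0 and later); this sketch also handles the None = 0 case explicitly:

```csharp
using System;
using System.Numerics;

static class FlagIndex
{
    // Maps a single-bit flag value onto an array index:
    // 0 -> 0, 1 -> 1, 2 -> 2, 4 -> 3, 8 -> 4, ...
    public static int ToIndex(int flagValue) =>
        flagValue == 0 ? 0 : BitOperations.Log2((uint)flagValue) + 1;
}

class FlagIndexDemo
{
    static void Main()
    {
        Console.WriteLine(FlagIndex.ToIndex(0)); // 0 (None)
        Console.WriteLine(FlagIndex.ToIndex(1)); // 1 (Easy)
        Console.WriteLine(FlagIndex.ToIndex(8)); // 4 (Expert)
    }
}
```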

How to make subtraction in numeric strings while holding the string length fixed in C#?

8654 -> 8653; 1000 -> 0999; 0100 -> 0099; 0024 -> 0023; 0010 -> 0009; 0007 -> 0006 etc.
I have a string variable of fixed length 4; its characters are always numbers. I want to make subtraction while obeying the given rule that its length of 4 must be protected.
What I tried: Convert.ToInt32, .Length operations, etc. In my code, I always ran into some sort of error.
I devised that I can do this via 3 steps:
1. Convert the string value to an (int) integer
2. Subtract "1" in that integer to find a new integer
3. Add "4 - length of new integer" times "0" to the beginning.
Anyway, independent of the solution plotted above (since I am a newbie, perhaps my thinking may even divert a standard user from a normal, plausible approach), isn't there a way to perform the above via a function or something else in C#?
A number doesn't have a format; its string representation has a format.
The steps you outlined for performing the arithmetic and outputting the result are correct. I would suggest using PadLeft to output the result in the desired format:
int myInt = int.Parse("0100");
myInt = myInt - 1;
string output = myInt.ToString().PadLeft(4, '0');
//Will output 0099
Your steps are almost right; however, there is an easier way to get the leading 0's: use numeric string formatting. With the format string "D4" it will behave exactly as you want.
var oldString = "1000";
var oldNum = Convert.ToInt32(oldString);
var newNum = oldNum - 1;
var newString = newNum.ToString("D4");
Console.WriteLine(newString); //prints "0999"
You could also use the custom formatting string "0000".
Well, I think others have implemented what you have implemented already. The reason might be that you didn't post your code. But none of the answers addresses your main question...
Your approach is totally fine. To make it reusable, you need to put it into a method. A method can look like this:
private string SubtractOnePreserveLength(string originalNumber)
{
// 1. Convert the string value to an (int) integer
// 2. Subtract "1" in that integer to find a new integer
// 3. Add "4 - length of new integer" times "0" to the beginning.
return <result of step 3 >;
}
You can then use the method like this:
string result = SubtractOnePreserveLength("0100");
// result is 0099
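For completeness, a sketch of how the method body could implement its three commented steps, using the "D4" format string shown earlier (this assumes the result stays non-negative):

```csharp
using System;

static class FixedWidthMath
{
    // 1. Convert the string to an int.
    // 2. Subtract one.
    // 3. Left-pad with zeros back to 4 digits.
    public static string SubtractOnePreserveLength(string originalNumber)
    {
        int value = int.Parse(originalNumber);
        return (value - 1).ToString("D4");
    }
}

class PadDemo
{
    static void Main()
    {
        Console.WriteLine(FixedWidthMath.SubtractOnePreserveLength("0100")); // 0099
        Console.WriteLine(FixedWidthMath.SubtractOnePreserveLength("1000")); // 0999
    }
}
```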

binary number represented in a long variable in c#

In code I am having a problem: I would like to see a binary format in the enumerator class. In C# we have the possibility to represent hexadecimal with 0xFF (for example). I would like to know if we have something similar for binary numbers, like:
public static class MyEnum
{
public static const long TR = 000000001;
public static const long TRP = 000000010;
...
}
This enumerator represents 1, 2, 4, 8, ... for any type I put inside. I just need a binary notation so I can read the numbers easily.
How can I represent binary in C#?
I don't think there is any such representation in C#.
From the ECMA standard:
9.4.4.2 Integer literals Integer literals are used to write values of types int, uint, long, and ulong. Integer literals have two possible
forms: decimal and hexadecimal.
Also check .NET Compiler Platform ("Roslyn")
Probably C# 6.0 will add that feature
C# now tries to help us by introducing a binary literal. Let's start with what we currently have:
var num1 = 1234; //1234
var num2 = 0x1234; //4660
What could possible come now? Here's the answer:
var num3 = 0b1010; //10
Of course binary digits are becoming quite long very fast. This is why a nice separator has been introduced:
var num4 = 0b1100_1010; //202
There can be as many underscores as you like, and they can even be adjacent. Best of all, underscores also work for normal numbers and hex literals:
var num5 = 1_234_567_890; //1234567890
var num6 = 0xFF_FA_88_BC; //4294609084
var num7 = 0b10_01__01_10; //150
The only constraint is that, of course, a number cannot start with an underscore.
Binary literals will make enumerations and bit vectors a little easier to understand and handle. They are simply closer to what we are thinking when we create such constructs.
If you plan to use only powers of two, the idiomatic way of doing it is by using shifts:
public const long TR = 1L << 0;
public const long TRP = 1L << 1;
The L suffix becomes necessary when you shift left by 32 or more positions.
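A quick illustration of why the suffix matters: for a 32-bit int the shift count is masked to its low 5 bits, so without the L a large shift silently wraps around instead of failing:

```csharp
using System;

class ShiftSuffixDemo
{
    static void Main()
    {
        // int shift counts are taken mod 32, so 1 << 40 is really 1 << 8.
        Console.WriteLine(1 << 40);  // 256

        // With the L suffix the shift is done on a 64-bit long.
        Console.WriteLine(1L << 40); // 1099511627776
    }
}
```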
C# does not provide a syntax for binary literals.
You can't - not yet, anyway. At the time this was planned for C# 6, and it eventually shipped in C# 7.0.
The syntax is similar to hexadecimal:
int i = 0b1110000101;
Another great feature that complements this is that you can use underscores in numbers:
int i = 0b11_1000_0101;
I'm not sure if I would ever really do this, but here's an idea that just came to me:
const byte b0001 = 0x1;
const byte b0010 = 0x2;
const byte b0100 = 0x4;
const byte b1000 = 0x8;
const long Value1 = (// 0b0111
b0001 |
b0010 |
b0100);
const long Value2 = (// 0b01100111
b0010 |
b0100)
<< 4 | (
b0001 |
b0010 |
b0100);
Or you could do something like this, which is the same idea but a bit more readable:
const byte b0110 = 0x6;
const byte b0111 = 0x7;
const long Value3 = b0110 << 4 | b0111;

Calculating the number of bits in a Subnet Mask in C#

I have a task to complete in C#. I have a Subnet Mask: 255.255.128.0.
I need to find the number of bits in the Subnet Mask, which would be, in this case, 17.
However, I need to be able to do this in C# WITHOUT the use of the System.Net library (the system I am programming in does not have access to this library).
It seems like the process should be something like:
1) Split the Subnet Mask into Octets.
2) Convert the Octets to be binary.
3) Count the number of Ones in each Octet.
4) Output the total number of found Ones.
However, my C# is pretty poor. Does anyone have the C# knowledge to help?
Bit counting algorithm taken from:
http://www.necessaryandsufficient.net/2009/04/optimising-bit-counting-using-iterative-data-driven-development/
string mask = "255.255.128.0";
int totalBits = 0;
foreach (string octet in mask.Split('.'))
{
byte octetByte = byte.Parse(octet);
while (octetByte != 0)
{
totalBits += octetByte & 1; // logical AND on the LSB
octetByte >>= 1; // do a bitwise shift to the right to create a new LSB
}
}
Console.WriteLine(totalBits);
The simplest algorithm from the article was used here. If performance is critical, you might want to read the article and use one of its more optimized solutions.
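One classic optimization in that family is Kernighan's trick: x & (x - 1) clears the lowest set bit, so the inner loop runs once per set bit rather than once per bit position. A sketch:

```csharp
using System;

static class SubnetBits
{
    public static int CountBits(string mask)
    {
        int total = 0;
        foreach (string octet in mask.Split('.'))
        {
            int b = byte.Parse(octet);
            while (b != 0)
            {
                b &= b - 1; // clear the lowest set bit
                total++;
            }
        }
        return total;
    }
}

class SubnetBitsDemo
{
    static void Main()
    {
        Console.WriteLine(SubnetBits.CountBits("255.255.128.0")); // 17
    }
}
```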
string ip = "255.255.128.0";
string a = "";
ip.Split('.').ToList().ForEach(x => a += Convert.ToString(Convert.ToInt32(x), 2));
int ones_found = a.Replace("0", "").Length;
A complete sample:
public int CountBit(string mask)
{
int ones = 0;
Array.ForEach(mask.Split('.'), s => Array.ForEach(Convert.ToString(int.Parse(s), 2).Where(c => c == '1').ToArray(), k => ones++));
return ones;
}
You can convert a number to binary like this:
string ip = "255.255.128.0";
string[] tokens = ip.Split('.');
string result = "";
foreach (string token in tokens)
{
int tokenNum = int.Parse(token);
string octet = Convert.ToString(tokenNum, 2);
octet = octet.PadLeft(8, '0'); // pad on the left so bit positions are preserved
result += octet;
}
int mask = result.LastIndexOf('1') + 1;
The solution is to use a binary operation like
foreach(string octet in ipAddress.Split('.'))
{
int oct = int.Parse(octet);
while(oct !=0)
{
total += oct & 1; // {1}
oct >>=1; //{2}
}
}
The trick is that on line {1} the binary AND is in essence a multiplication: 1x0=0, 1x1=1. So if we have some hypothetical number
0000101001 and AND it with 1 (which in binary is nothing other than 0000000001), we get
0000101001
0000000001
The rightmost digit is 1 in both numbers, so the binary AND returns 1; if the least-significant digit of either number were 0, the result would be 0.
So here, on the line total += oct & 1, we add either 1 or 0 to total, based on that digit.
On line {2} we just shift the bits to the right, effectively dividing the number by 2, until it becomes 0.
Easy.
EDIT
This is valid for integer and byte types, but do not use this technique on floating-point numbers. By the way, it's a pretty valuable solution for this question.

What is the tilde (~) in the enum definition?

I'm always surprised that even after using C# for all this time now, I still manage to find things I didn't know about...
I've tried searching the internet for this, but using the "~" in a search isn't working for me so well and I didn't find anything on MSDN either (not to say it isn't there)
I saw this snippet of code recently, what does the tilde(~) mean?
/// <summary>
/// Enumerates the ways a customer may purchase goods.
/// </summary>
[Flags]
public enum PurchaseMethod
{
All = ~0,
None = 0,
Cash = 1,
Check = 2,
CreditCard = 4
}
I was a little surprised to see it so I tried to compile it, and it worked... but I still don't know what it means/does. Any help??
~ is the unary one's complement operator -- it flips the bits of its operand.
~0 = 0xFFFFFFFF = -1
in two's complement arithmetic, ~x == -x-1
the ~ operator can be found in pretty much any language that borrowed syntax from C, including Objective-C/C++/C#/Java/Javascript.
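A quick demonstration of those identities:

```csharp
using System;

class ComplementDemo
{
    static void Main()
    {
        Console.WriteLine(~0);                      // -1 (all 32 bits set)
        Console.WriteLine(~5 == -5 - 1);            // True
        Console.WriteLine(Convert.ToString(~0, 2)); // 11111111111111111111111111111111
    }
}
```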
I'd think that:
[Flags]
public enum PurchaseMethod
{
None = 0,
Cash = 1,
Check = 2,
CreditCard = 4,
All = Cash | Check | CreditCard
}
Would be a bit more clear.
public enum PurchaseMethod
{
All = ~0, // all bits of All are 1. the ~ operator just inverts bits
None = 0,
Cash = 1,
Check = 2,
CreditCard = 4
}
Because of two complement in C#, ~0 == -1, the number where all bits are 1 in the binary representation.
It's better than the
All = Cash | Check | CreditCard
solution, because if you add another method later, say:
PayPal = 8,
you will already be done with the tilde All, whereas with the other approach you would have to change the All line. So it's less error-prone later.
Just a side note, when you use
All = Cash | Check | CreditCard
you have the added benefit that Cash | Check | CreditCard would evaluate to All and not to another value (-1) that is not equal to all while containing all values.
For example, if you use three check boxes in the UI
[] Cash
[] Check
[] CreditCard
and sum their values, and the user selects them all, you would see All in the resulting enum.
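A self-contained sketch of that behavior (the enum is re-declared here so the example compiles on its own):

```csharp
using System;

[Flags]
enum PurchaseMethod
{
    None = 0,
    Cash = 1,
    Check = 2,
    CreditCard = 4,
    All = Cash | Check | CreditCard
}

class AllFlagsDemo
{
    static void Main()
    {
        var selected = PurchaseMethod.Cash | PurchaseMethod.Check | PurchaseMethod.CreditCard;
        Console.WriteLine(selected == PurchaseMethod.All); // True
        Console.WriteLine(selected);                       // All

        // Had All been defined as ~0, 'selected' (7) would not compare
        // equal to All (-1), even though every flag is set.
    }
}
```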
For others who found this question illuminating, I have a quick ~ example to share. The following snippet from the implementation of a paint method, as detailed in this Mono documentation, uses ~ to great effect:
PaintCells (clipBounds,
DataGridViewPaintParts.All & ~DataGridViewPaintParts.SelectionBackground);
Without the ~ operator, the code would probably look something like this:
PaintCells (clipBounds, DataGridViewPaintParts.Background
| DataGridViewPaintParts.Border
| DataGridViewPaintParts.ContentBackground
| DataGridViewPaintParts.ContentForeground
| DataGridViewPaintParts.ErrorIcon
| DataGridViewPaintParts.Focus);
... because the enumeration looks like this:
public enum DataGridViewPaintParts
{
None = 0,
Background = 1,
Border = 2,
ContentBackground = 4,
ContentForeground = 8,
ErrorIcon = 16,
Focus = 32,
SelectionBackground = 64,
All = 127 // which is equal to Background | Border | ... | Focus
}
Notice this enum's similarity to Sean Bright's answer?
I think the most important take away for me is that ~ is the same operator in an enum as it is in a normal line of code.
It's a complement operator,
Here is an article i often refer to for bitwise operators
http://www.blackwasp.co.uk/CSharpLogicalBitwiseOps.aspx
Also msdn uses it in their enums article which demonstrates it use better
http://msdn.microsoft.com/en-us/library/cc138362.aspx
The alternative I personally use, which does the same thing as @Sean Bright's answer but looks better to me, is this one:
[Flags]
public enum PurchaseMethod
{
None = 0,
Cash = 1,
Check = 2,
CreditCard = 4,
PayPal = 8,
BitCoin = 16,
All = Cash + Check + CreditCard + PayPal + BitCoin
}
Notice how the binary nature of those numbers, which are all powers of two, makes the following assertion true: (a + b + c) == (a | b | c). And IMHO, + looks better.
I have done some experimenting with ~ and found that it can have pitfalls. Consider this snippet for LINQPad, which shows that the All enum value does not behave as expected when all values are OR'd together.
void Main()
{
StatusFilterEnum x = StatusFilterEnum.Standard | StatusFilterEnum.Saved;
bool isAll = (x & StatusFilterEnum.All) == StatusFilterEnum.All;
//isAll is false but the naive user would expect true
isAll.Dump();
}
[Flags]
public enum StatusFilterEnum {
Standard =0,
Saved =1,
All = ~0
}
Each bit in [Flags] enum means something enabled (1) or disabled (0).
~ operator is used to invert all the bits of the number. Example: 00001001b turns into 11110110b.
So ~0 is used to create the value where all bits are enabled, like 11111111b for 8-bit enum.
Just want to add that for these kinds of enums it may be more convenient to use the bitwise left-shift operator, like this:
[Flags]
enum SampleEnum
{
None = 0, // 0000b
First = 1 << 0, // 0001b
Second = 1 << 1, // 0010b
Third = 1 << 2, // 0100b
Fourth = 1 << 3, // 1000b
All = ~0 // 1111b
}
