Binary notation for writing bits - C#

C# has literal notations that indicate whether the number you wrote is a float, a double, an integer, and so on.
So I would like to write a binary number; how do I do that?
Say I have a byte:
byte Number = 10011000 //(8 bits)
How can I write it without having to know that 10011000 in binary is 152 in decimal?
P.S.: Parsing a string is completely out of the question (I need performance).

As of C# 7.0 you can use the 0b prefix for binary literals, similar to the 0x prefix for hex.
int x = 0b1010000; //binary value of 80
int seventyFive = 0b1001011; //binary value of 75
give it a shot

You can write this:
int binaryNotation = 0b_1001_1000;
In C# 7.0 and later, you can use the underscore '_' as a digit separator in decimal, binary, or hexadecimal literals to improve legibility. (Putting the separator directly after the 0b or 0x prefix, as above, requires C# 7.2.)
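For reference, a quick sketch showing the separator in all three literal forms (the variable names are just illustrative):
int million = 1_000_000;    // decimal literal
int bits    = 0b1001_1000;  // binary literal, 152 in decimal
int mask    = 0xFF_FF;      // hexadecimal literal, 65535 in decimal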

There's no way to do it other than parsing a string, I'm afraid:
byte number = (byte) Convert.ToInt32("10011000", 2);
Unfortunately you will be unable to assign constant values like that, of course.
If you find yourself doing that a lot, I guess you could write an extension method on string to make things more readable:
public static class StringExt
{
    public static byte AsByte(this string self)
    {
        return (byte)Convert.ToInt32(self, 2);
    }
}
Then the code would look like this:
byte number = "10011000".AsByte();
I'm not sure that would be a good idea though...
Personally, I just use hex initializers, e.g.
byte number = 0x98;

Related

representing a hexadecimal value by converting it to char

I am outputting the char 0x11a1 by converting it to char.
Then I multiply 0x11a1 by itself and output it again, but I do not get what I expect.
By assigning the result to an int (int hgvvv = chch0;) and writing it to the console, I can see that the computer thinks 0x11a1 * 0x11a1 equals 51009, but it actually equals 20367169.
As a result I do not get what I want.
Could you please explain to me why?
char chch0 = (char)0x11a1;
Console.WriteLine(chch0);
chch0 = (char)(chch0 * chch0);
Console.WriteLine(chch0);
int hgvvv = chch0;
Console.WriteLine(hgvvv);
We know that 1 byte is 8 bits.
We know that a char in C# is 2 bytes, which is 16 bits.
If we multiply 0x11a1 X 0x11a1 we get 0x136c741.
0x136c741 in binary is 0001001101101100011101000001
Considering we only have 16 bits - we would only see the last 16 bits which is: 1100011101000001
1100011101000001 in hex is 0xc741.
This is 51009 that you are seeing.
You are being limited by the size of the char type in C#.
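If it helps, here is a minimal sketch (keeping the names from the question) showing that the full product survives as long as you keep it in an int; only the cast back to char truncates it:
char chch0 = (char)0x11a1;
int product = chch0 * chch0;        // char operands are promoted to int, so nothing is lost here
Console.WriteLine(product);         // 20367169
Console.WriteLine((ushort)product); // 51009 - the low 16 bits, i.e. the truncation described above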
Hope this answer cleared things up!
By enabling the checked context in your project or by adding it this way in your code:
checked
{
    char chch0 = (char)0x11a1;
    Console.WriteLine(chch0);
    chch0 = (char)(chch0 * chch0); // OverflowException
    Console.WriteLine(chch0);
    int hgvvv = chch0;
    Console.WriteLine(hgvvv);
}
You will see that you get an OverflowException, because the char type (2 bytes) is only able to store values up to Char.MaxValue = 0xFFFF.
The value you expect (20367169) is larger than 0xFFFF, so you basically get only the two least significant bytes the type was able to store, which is:
Console.WriteLine(20367169 & 0xFFFF);
// prints: 51009

16-bit signed from ASCII hexadecimal sequence [duplicate]

Possible Duplicate:
How to convert numbers between hexadecimal and decimal in C#?
I'm struggling with converting to a signed int using C#.
Let's say I have the following string:
AAFE B4FE B8FE
Here we have 3 samples. Each sample (a signed 16-bit value) is written as an ASCII hexadecimal sequence of 4 digits (2 bytes × 2 digits per byte).
Any suggestions?
Thank you.
If you need to specify the endian-ness of the parsed values (instead of assuming that they are in little-endian byte order), then you need to place each byte in the appropriate place within the resulting short.
Note that exceptions will be thrown in HexToByte if the string values are not well formatted.
// NumberStyles comes from the System.Globalization namespace.
static byte HexToByte(string value, int offset)
{
    string hex = value.Substring(offset, 2);
    return byte.Parse(hex, NumberStyles.HexNumber);
}

static short HexToSigned16(string value, bool isLittleEndian)
{
    byte first = HexToByte(value, 0);
    byte second = HexToByte(value, 2);
    if (isLittleEndian)
        return (short)(first | (second << 8));
    else
        return (short)(second | (first << 8));
}
...
string[] values = "AAFE B4FE B8FE".Split();
foreach (string value in values)
{
Console.WriteLine("{0} == {1}", value, HexToSigned16(value, true));
}
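If I'm reading the code right, that loop prints AAFE == -342, B4FE == -332 and B8FE == -328: each pair of hex digits becomes one byte (AA, then FE), and with isLittleEndian set to true the second byte supplies the high 8 bits, so "AAFE" is read as 0xFEAA, which is -342 as a signed 16-bit value.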
You can parse strings of numbers of any standard base using overloads in the Convert class that accept a base. In this case, you'd probably want the Convert.ToInt16(string value, int fromBase) overload.
Then you could do something like this:
var groupings = "AAFE B4FE B8FE".Split();
var converted = groupings
.Select(grouping => Convert.ToInt16(grouping, 16))
.ToList();

Arbitrarily large integers in C#

How can I implement this python code in c#?
Python code:
print(str(int(str("e60f553e42aa44aebf1d6723b0be7541"), 16)))
Result:
305802052421002911840647389720929531201
But in C# I have problems with such big numbers.
Can you help me?
I get different results in Python and C#. Where could the mistake be?
Primitive types (such as Int32 and Int64) have a finite size that is not enough for such a big number. For example:
Data type     Maximum positive value
Int32         2,147,483,647
UInt32        4,294,967,295
Int64         9,223,372,036,854,775,807
UInt64        18,446,744,073,709,551,615
Your number   305,802,052,421,002,911,840,647,389,720,929,531,201
In this case, representing that number requires 128 bits. Since .NET Framework 4.0 there is a data type for arbitrarily large integers: System.Numerics.BigInteger. You do not need to specify any size because it grows with the number itself (which means you may even get an OutOfMemoryException when you multiply two very big numbers, for example).
To come back to your question, first parse your hexadecimal number:
string bigNumberAsText = "e60f553e42aa44aebf1d6723b0be7541";
// Requires using System.Numerics; and using System.Globalization;.
// Prepend "0" so the value parses as positive: with AllowHexSpecifier, a leading
// hex digit of 8-F is otherwise read as a negative two's-complement value.
BigInteger bigNumber = BigInteger.Parse("0" + bigNumberAsText,
                                        NumberStyles.AllowHexSpecifier);
Then simply print it to console:
Console.WriteLine(bigNumber.ToString());
You may be interested in calculating how many bits you need to represent an arbitrary number; use this function (if I remember correctly, the original implementation comes from Numerical Recipes in C):
public static uint GetNeededBitsToRepresentInteger(BigInteger value)
{
    uint neededBits = 0;
    while (value != 0)
    {
        value >>= 1;
        ++neededBits;
    }
    return neededBits;
}
Then, to calculate the required size of a number written as a string:
public static uint GetNeededBitsToRepresentInteger(string value,
                                                   NumberStyles numberStyle = NumberStyles.None)
{
    return GetNeededBitsToRepresentInteger(
        BigInteger.Parse(value, numberStyle));
}
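As a quick sanity check (my own example, reusing the hexadecimal string from the question, with a leading "0" so it parses as a positive value):
// Prints 128 - the question's number needs 128 bits.
Console.WriteLine(GetNeededBitsToRepresentInteger(
    "0e60f553e42aa44aebf1d6723b0be7541", NumberStyles.AllowHexSpecifier));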
If you just want to be able to use larger numbers, there is BigInteger, which can hold arbitrarily many digits.
To find the number of bits you need to store a BigInteger N, you can use:
BigInteger N = ...;
int nBits = (int)Math.Ceiling(BigInteger.Log(N, 2.0)); // Math.Ceiling from System, rather than Unity's Mathf.CeilToInt with its lossy cast to float

What's the best way to represent System.Double as a sortable string?

In data formats where all underlying types are strings, numeric types must be converted to a standardized string format which can be compared alphabetically. For example, a short for the value 27 could be represented as 00027 if there are no negatives.
What's the best way to represent a double as a string? In my case I can ignore negatives, but I'd be curious how you'd represent the double in either case.
UPDATE
Based on Jon Skeet's suggestion, I'm now using this, though I'm not 100% sure it'll work correctly:
static readonly string UlongFormatString = new string('0', ulong.MaxValue.ToString().Length);
public static string ToSortableString(this double n)
{
return BitConverter.ToUInt64(BitConverter.GetBytes(BitConverter.DoubleToInt64Bits(n)), 0).ToString(UlongFormatString);
}
public static double DoubleFromSortableString(this string n)
{
return BitConverter.Int64BitsToDouble(BitConverter.ToInt64(BitConverter.GetBytes(ulong.Parse(n)), 0));
}
UPDATE 2
I have confirmed what Jon suspected - negatives don't work using this method. Here is some sample code:
void Main()
{
var a = double.MaxValue;
var b = double.MaxValue/2;
var c = 0d;
var d = double.MinValue/2;
var e = double.MinValue;
Console.WriteLine(a.ToSortableString());
Console.WriteLine(b.ToSortableString());
Console.WriteLine(c.ToSortableString());
Console.WriteLine(d.ToSortableString());
Console.WriteLine(e.ToSortableString());
}
static class Test
{
static readonly string UlongFormatString = new string('0', ulong.MaxValue.ToString().Length);
public static string ToSortableString(this double n)
{
return BitConverter.ToUInt64(BitConverter.GetBytes(BitConverter.DoubleToInt64Bits(n)), 0).ToString(UlongFormatString);
}
}
Which produces the following output:
09218868437227405311
09214364837600034815
00000000000000000000
18437736874454810623
18442240474082181119
Clearly not sorted as expected.
UPDATE 3
The accepted answer below is the correct one. Thanks guys!
Padding is potentially rather awkward for doubles, given the enormous range (double.MaxValue is 1.7976931348623157E+308).
Does the string representation still have to be human-readable, or just reversible?
If reversibility is enough, one option is to reinterpret the double's raw bits as a 64-bit integer and format that as a fixed-width string (which is what the update above does). That gives a reversible conversion leading to a reasonably short string representation preserving lexicographic ordering - but it wouldn't be at all obvious what the double value was just from the string.
EDIT: Don't use BitConverter.DoubleToInt64Bits alone. That reverses the ordering for negative values.
I'm sure you can perform this conversion using DoubleToInt64Bits and then some bit-twiddling, but unfortunately I can't get it to work right now, and I have three kids who are desperate to go to the park...
In order to make everything sort correctly, negative numbers need to be stored in ones-complement form instead of sign-magnitude (otherwise negatives and positives sort in opposite orders), and the sign bit needs to be flipped (so that negatives sort before positives). This code should do the trick:
static ulong EncodeDouble(double d)
{
    long ieee = System.BitConverter.DoubleToInt64Bits(d);
    ulong widezero = 0;
    // Non-negative doubles: set the sign bit so they sort above all negatives.
    // Negative doubles: flip every bit (ones complement), which clears the sign
    // bit and reverses their order so that more negative values sort first.
    return ((ieee < 0) ? widezero : ((~widezero) >> 1)) ^ (ulong)~ieee;
}

static double DecodeDouble(ulong lex)
{
    ulong widezero = 0;
    // Undo the transformation above, then reinterpret the bits as a double.
    long ieee = (long)(((0 <= (long)lex) ? widezero : ((~widezero) >> 1)) ^ ~lex);
    return System.BitConverter.Int64BitsToDouble(ieee);
}
Demonstration here: http://ideone.com/JPNPY
Here's the complete solution, to and from strings:
static string EncodeDouble(double d)
{
    long ieee = System.BitConverter.DoubleToInt64Bits(d);
    ulong widezero = 0;
    ulong lex = ((ieee < 0) ? widezero : ((~widezero) >> 1)) ^ (ulong)~ieee;
    return lex.ToString("X16");
}

static double DecodeDouble(string s)
{
    ulong lex = ulong.Parse(s, System.Globalization.NumberStyles.AllowHexSpecifier);
    ulong widezero = 0;
    long ieee = (long)(((0 <= (long)lex) ? widezero : ((~widezero) >> 1)) ^ ~lex);
    return System.BitConverter.Int64BitsToDouble(ieee);
}
Demonstration: http://ideone.com/pFciY
I believe that a modified scientific notation, with the exponent first, and using underscore for positive, would sort lexically in the same order as numerically.
If you want, you can even append the normal representation, since a suffix won't affect sorting.
Examples
E000M3 +3.0
E001M2.7 +27.0
Unfortunately, it doesn't work for either negative numbers or negative exponents. You could introduce a bias for the exponent, like the IEEE format uses internally.
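As a rough sketch of the idea for strictly positive values only (my own illustration, using an arbitrary bias of 500 on the exponent and glossing over floating-point edge cases in the exponent/mantissa split):
static string ToExponentFirstString(double value)
{
    if (value <= 0) throw new ArgumentOutOfRangeException(nameof(value));
    int exponent = (int)Math.Floor(Math.Log10(value));
    double mantissa = value / Math.Pow(10, exponent);
    // Biased, zero-padded exponent first so strings sort by magnitude,
    // then the mantissa (always one leading digit, 1-9).
    return string.Format("E{0:D4}M{1:R}", exponent + 500, mantissa);
}
With this, 3.0 becomes E0500M3 and 27.0 becomes E0501M2.7; because the mantissa always has a single digit before the decimal point and no trailing zeros, the mantissa part also compares correctly character by character.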
As it turns out... The org.apache.solr.util package contains the NumberUtils class. This class has static methods that do everything needed to convert doubles (and other data values) to sortable strings (and back). The methods could not be easier to use. A few notes:
Of course, NumberUtils is written in Java (not C#). My guess is that the code could be converted to C#; however, I am not well versed in C#. The source is readily available online.
The resulting strings are not printable (at all).
The comments in the code indicate that all exotic cases, including negative numbers and infinities, should work correctly.
I haven't done any benchmarks... However, based on a quick scan of the code, it should be very fast.
The code below shows what needs to be done to use this library.
String key = NumberUtils.double2sortableStr(35.2);

c# byte and bit casting - could this be done better?

I'm building arrays of bytes to be communicated over Bluetooth. These bytes are partly built from enumerated types, such as the following :
public enum Motor
{
    A = 0x00,
    B = 0x01,
    C = 0x02,
    AB = 0x03,
    AC = 0x04,
    BC = 0x05,
}
Later in my code I create a variable called MyMotor, set to Motor.B. I then pass this variable to a method in which I build my byte array.
My issue is that the software I'm communicating with via Bluetooth expects the hex value of the enumerated value as a string, i.e. MyMotor = Motor.B = byte 0x01 = decimal 1 = hex 31 (the ASCII code of '1'). However, casting MyMotor directly to a char would result in it evaluating to its enumerated value, i.e. MyMotor = B = hex 42.
For various reasons I can't change my enumerated list, so I've settled on what feels like a very hacked-together two-line section of code:
String motorchar = Convert.ToString(Convert.ToInt32(MyMotor)); // convert to temp var
command[5] = (byte)(motorchar[0]); // store hex value of var
It works as I'd like, i.e. command[5] = hex 31.
I wonder if there's a better way. All the articles I've found talk about dealing with entire byte arrays rather than individual bytes and chars.
chars[0] = (char)('0' + ((byte)myMotor & 0x0F));
chars[1] = (char)('0' + (((byte)myMotor & 0xF0) >> 4));
This needs a little more tweaking for hexadecimal, though.
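One possible version of that tweaking (NibbleToHexChar is my own helper, not part of the original answer):
static char NibbleToHexChar(int nibble)
{
    // 0-9 map to '0'-'9', 10-15 map to 'A'-'F'
    return (char)(nibble < 10 ? '0' + nibble : 'A' + (nibble - 10));
}

chars[0] = NibbleToHexChar((byte)myMotor & 0x0F);         // low nibble
chars[1] = NibbleToHexChar(((byte)myMotor & 0xF0) >> 4);  // high nibble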
If your other app expects a string then provide one.
Make an array of strings to hold the values (which you know) and use the int value of the enum as an index into that array.
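A minimal sketch of that idea (the string values are assumed to match what the other software expects, and myMotor/command are the variables from the question):
string[] motorStrings = { "0", "1", "2", "3", "4", "5" };  // indexed by (int)Motor
string motorValue = motorStrings[(int)myMotor];            // Motor.B -> "1"
command[5] = (byte)motorValue[0];                          // '1' -> 0x31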
Unless I am missing something, your two-line code is equivalent to just calling:
BitConverter.ToString(new[] { (byte)MyMotor });
No?
If you know that your program's values and the values the API expects always differ by some fixed amount (for example, AC = 0x04 but the API wants "4"), then you can write a simple conversion:
char c = (char)((int)myMotor + '0');
That gets kind of ugly when there are more than 10 values, though. You can special case it for hexadecimal digits, but after that it's pretty bad.
You're better off creating a lookup table, the most general being a dictionary:
Dictionary<Motor, string> MotorLookup = new Dictionary<Motor, string>()
{
    { Motor.A, "0" },
    { Motor.B, "1" },
    // etc, etc.
};
That's going to be the most flexible and most maintainable.
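For example (using the enum variable and command array from the question), the byte the device expects would then be:
command[5] = (byte)MotorLookup[myMotor][0];  // Motor.B -> "1" -> '1' -> 0x31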
Why not use:
//If you want the ASCII representation.
// e.g. myMotor == B, then the Byte Decimal value will be 49 or 0x31.
command[5] = (byte)((byte)myMotor).ToString()[0];
or
//If you want the numeric representation:
// e.g. myMotor == B, then the Byte Decimal value will be 1 or 0x01.
command[5] = (byte)myMotor;
I see you using values like "0x01" and "0x05". The "0x" prefix means it's a hexadecimal number, but you never go past 5, so it's exactly the same as using integer values "1" and "5".
I don't see how you're even getting Decimal 1 == Hex 31 or Hex 42 that you mention in your post. The ASCII equivalent of Char '1' is Decimal 49 or Hex 31.
