C# byte and bit casting - could this be done better?

I'm building arrays of bytes to be communicated over Bluetooth. These bytes are partly built from enumerated types, such as the following:
public enum Motor
{
A = 0x00,
B = 0x01,
C = 0x02,
AB = 0x03,
AC = 0x04,
BC = 0x05,
}
Later in my code I create a variable called MyMotor of type Motor, set to Motor.B. I then pass this variable to a method in which I build my byte array.
My issue is that the software I'm communicating with via Bluetooth expects the hex value of the enumerated value as an ASCII character, i.e. MyMotor.B = byte 0x01 = dec 1 = char '1' = hex 0x31. However, casting MyMotor directly to a char would make it evaluate to its name instead, i.e. MyMotor = 'B' = hex 0x42.
For various reasons I can't change my enumerated list, so I've settled on what feels like a very hacked-together two-line section of code:
String motorchar = Convert.ToString(Convert.ToInt32(MyMotor)); // convert to temp var
command[5] = (byte)(motorchar[0]); // store hex value of var
It works as I'd like, i.e. command[5] = hex 0x31.
I wonder if there's a better way. All the articles I've found talk about dealing with entire byte arrays rather than individual bytes and chars.

chars[0] = (char)('0' + ((byte)myMotor & 0x0F));
chars[1] = (char)('0' + (((byte)myMotor & 0xF0) >> 4));
This needs a little more tweaking for hexadecimal, though.
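One way that tweak might look (a sketch, assuming the target wants the ASCII hex digits of the enum's numeric value; NibbleToHexChar is a hypothetical helper, not part of the question's code):

```csharp
using System;

public enum Motor { A = 0x00, B = 0x01, C = 0x02, AB = 0x03, AC = 0x04, BC = 0x05 }

public static class MotorHex
{
    // Map a nibble (0-15) to its ASCII hex digit ('0'-'9', 'A'-'F').
    public static char NibbleToHexChar(int nibble) =>
        (char)(nibble < 10 ? '0' + nibble : 'A' + nibble - 10);

    public static void Main()
    {
        Motor myMotor = Motor.B;
        byte value = (byte)myMotor;
        char high = NibbleToHexChar((value & 0xF0) >> 4);
        char low  = NibbleToHexChar(value & 0x0F);
        Console.WriteLine($"{high}{low}"); // "01" for Motor.B
    }
}
```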

If your other app expects a string then provide one.
Make an array of strings to hold the values (which you know) and use the int value of the enum as an index into that array.

Unless I am missing something, your two-line code is equivalent to just calling:
BitConverter.ToString(MyMotor);
No?

If you know that your program's values and the values the API expects always differ by some fixed amount (for example, AC = 0x04, but the API wants "4"), then you can write a simple conversion:
char c = (char)((int)myMotor + '0');
That gets kind of ugly when there are more than 10 values, though. You can special-case it for hexadecimal digits, but after that it's pretty bad.
You're better off creating a lookup table, the most general being a dictionary:
Dictionary<Motor, string> MotorLookup = new Dictionary<Motor, string>() {
{ Motor.A, "0" },
{ Motor.B, "1" },
// etc, etc.
};
That's going to be the most flexible and most maintainable.
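Usage of the lookup table might look like this (a sketch; assumes the protocol wants the ASCII byte of the first character, as in the question's original two-liner):

```csharp
using System;
using System.Collections.Generic;

public enum Motor { A = 0x00, B = 0x01, C = 0x02, AB = 0x03, AC = 0x04, BC = 0x05 }

public class Demo
{
    static readonly Dictionary<Motor, string> MotorLookup = new Dictionary<Motor, string>
    {
        { Motor.A, "0" }, { Motor.B, "1" }, { Motor.C, "2" },
        { Motor.AB, "3" }, { Motor.AC, "4" }, { Motor.BC, "5" },
    };

    public static void Main()
    {
        byte[] command = new byte[6];
        Motor myMotor = Motor.B;
        command[5] = (byte)MotorLookup[myMotor][0]; // ASCII '1' == 0x31 == 49
        Console.WriteLine(command[5]); // 49
    }
}
```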

Why not use:
//If you want the ASCII representation.
// e.g. myMotor == B, then the Byte Decimal value will be 49 or 0x31.
command[5] = (byte)((byte)myMotor).ToString()[0];
or
//If you want the numeric representation:
// e.g. myMotor == B, then the Byte Decimal value will be 1 or 0x01.
command[5] = (byte)myMotor;
I see you using values like "0x01" and "0x05". The "0x" prefix means it's a hexadecimal number, but you never go past 5, so it's exactly the same as using integer values "1" and "5".
I don't see how you're even getting Decimal 1 == Hex 31 or Hex 42 that you mention in your post. The ASCII equivalent of Char '1' is Decimal 49 or Hex 31.

Related

representing a hexadecimal value by converting it to char

I am outputting the char 0x11a1 by converting it to char.
Then I multiply 0x11a1 by itself and output it again, but I do not get what I expect.
By doing int hgvvv = chch0; and outputting it to the console, I can see that the computer thinks 0x11a1 * 0x11a1 equals 51009, but it actually equals 20367169.
As a result I do not get what I want.
Could you please explain to me why?
char chch0 = (char)0x11a1;
Console.WriteLine(chch0);
chch0 = (char)(chch0 * chch0);
Console.WriteLine(chch0);
int hgvvv = chch0;
Console.WriteLine(hgvvv);
We know that 1 byte is 8 bits.
We know that a char in C# is 2 bytes, which is 16 bits.
If we multiply 0x11a1 by 0x11a1 we get 0x136c741.
0x136c741 in binary is 0001001101101100011101000001.
Since we only have 16 bits, we keep only the last 16 bits, which are: 1100011101000001.
1100011101000001 in hex is 0xc741.
That is the 51009 you are seeing.
You are being limited by the size of the char type in C#.
Hope this answer cleared things up!
By enabling the checked context in your project or by adding it this way in your code:
checked {
char chch0 = (char)0x11a1;
Console.WriteLine(chch0);
chch0 = (char)(chch0 * chch0); // OverflowException
Console.WriteLine(chch0);
int hgvvv = chch0;
Console.WriteLine(hgvvv);
}
You will see that you get an OverflowException, because the char type (2 bytes) can only store values up to Char.MaxValue = 0xFFFF.
The value you expect (20367169) is larger than 0xFFFF, so you only get the two least significant bytes the type was able to store. Which is:
Console.WriteLine(20367169 & 0xFFFF);
// prints: 51009
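The truncation can also be demonstrated directly; in an unchecked context (the default) the cast simply discards everything above the low 16 bits:

```csharp
using System;

class TruncationDemo
{
    static void Main()
    {
        int product = 0x11a1 * 0x11a1;             // 20367169, too big for a char
        char truncated = unchecked((char)product); // keeps only the low 16 bits
        Console.WriteLine(product);                // 20367169
        Console.WriteLine((int)truncated);         // 51009 == 20367169 & 0xFFFF
    }
}
```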

c# convert 2 bytes into int value

I have read many posts and tried the BitConverter methods for the conversion, but I haven't gotten the desired result.
From a 2 byte array of:
byte[] dateArray = new byte[] { 0x07 , 0xE4 };
I need to get an integer with the value 2020, i.e. the decimal value of 0x7E4.
The following method does not return the desired value:
int i1 = BitConverter.ToInt16(dateArray, 0);
Endianness tells you how multi-byte numbers are stored on your computer. There are two possibilities: little endian and big endian.
Big endian means the most significant byte is stored first, i.e. 2020 would become 0x07, 0xE4.
Little endian means the least significant byte is stored first, i.e. 2020 would become 0xE4, 0x07.
Most computers are little endian, hence the other way round from what a human would expect. With BitConverter.IsLittleEndian you can check which endianness your computer has. Your code would become:
byte[] dateArray = new byte[] { 0x07 , 0xE4 };
if(BitConverter.IsLittleEndian)
{
Array.Reverse(dateArray);
}
int i1 = BitConverter.ToInt16(dateArray, 0);
Alternatively, just shift the bytes together yourself:
int i1 = dateArray[0] << 8 | dateArray[1];
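A minimal sketch showing both approaches side by side (bit shifting avoids the endianness question entirely):

```csharp
using System;

class BytesToInt
{
    static void Main()
    {
        byte[] dateArray = new byte[] { 0x07, 0xE4 };

        // Option 1: shift the big-endian bytes together by hand.
        int year = dateArray[0] << 8 | dateArray[1];
        Console.WriteLine(year); // 2020

        // Option 2: reverse the array so BitConverter reads it
        // correctly on a little-endian machine.
        if (BitConverter.IsLittleEndian)
            Array.Reverse(dateArray);
        Console.WriteLine(BitConverter.ToInt16(dateArray, 0)); // 2020
    }
}
```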

Binary notation for writing bits - C#

There are some notations to write numbers in C# that tell if what you wrote is float, double, integer and so on.
So I would like to write a binary number, how do I do that?
Say I have a byte:
byte Number = 10011000 //(8 bits)
How should I write it without having the trouble to know that 10011000 in binary = 152 in decimal?
P.S.: Parsing a string is completely out of question (I need performance)
As of C# 7.0 you can use the 0b prefix to write binary literals, similar to the 0x prefix for hex:
int x = 0b1010000; // binary for 80
int seventyFive = 0b1001011; // binary for 75
Give it a shot.
You can write this:
int binaryNotation = 0b_1001_1000;
In C# 7.0 and later, you can use the underscore '_' as a digit separator in decimal, binary, or hexadecimal literals to improve legibility (a separator directly after the 0b prefix requires C# 7.2).
There's no way to do it other than parsing a string, I'm afraid:
byte number = (byte) Convert.ToInt32("10011000", 2);
Unfortunately you will be unable to assign constant values like that, of course.
If you find yourself doing that a lot, I guess you could write an extension method on string to make things more readable:
public static class StringExt
{
public static byte AsByte(this string self)
{
return (byte)Convert.ToInt32(self, 2);
}
}
Then the code would look like this:
byte number = "10011000".AsByte();
I'm not sure that would be a good idea though...
Personally, I just use hex initializers, e.g.
byte number = 0x98;

How do I make BigInteger see the binary representation of this Hex string correctly?

The problem
I have a byte[] that is converted to a hex string, and then that string is parsed like this: BigInteger.Parse(thatString, NumberStyles.HexNumber).
This seems wasteful since BigInteger is able to accept a byte[], as long as the two's complement is accounted for.
A working (inefficient) example
According to MSDN, the most significant bit of the last byte should be zero for the number to be interpreted as positive. The following is an example of a hex number that has this issue:
byte[] ripeHashNetwork = GetByteHash();
foreach (var item in ripeHashNetwork)
{
Console.Write(item + "," );
}
// Output:
// 0,1,9,102,119,96,6,149,61,85,103,67,158,94,57,248,106,13,39,59,238,214,25,103,246
// Convert to Hex string using this http://stackoverflow.com/a/624379/328397
// Output:
// 00010966776006953D5567439E5E39F86A0D273BEED61967F6
Okay, let's pass that string into the static method of BigInteger:
BigInteger bi2 = BigInteger.Parse(thatString, NumberStyles.HexNumber);
// Output bi2.ToString() ==
// {25420294593250030202636073700053352635053786165627414518}
Now that I have a baseline of data, and known conversions that work, I want to make it better/faster/etc.
A not working (efficient) example
Now my goal is to round-trip a byte[] into BigInt and make the result look like 25420294593250030202636073700053352635053786165627414518. Let's get started:
So according to MSDN I need a zero in my last byte to keep my number from being interpreted as a negative two's complement value. I'll append the zero and print the array out to be sure:
foreach (var item in ripeHashNetwork)
{
Console.Write(item + "," );
}
// Output:
// 0,1,9,102,119,96,6,149,61,85,103,67,158,94,57,248,106,13,39,59,238,214,25,103,246,0
Okay, let's pass that byte[] into the constructor of BigInteger:
BigInteger bi2 = new BigInteger(ripeHashNetwork);
// Output bi2.ToString() ==
// {1546695054495833846267861247985902403343958296074401935327488}
What I skipped over is the sample of what bigInt does to my byte array if I don't add the trailing zero. What happens is that I get a negative number which is wrong. I'll post that if you want.
So what am I doing wrong?
When you are going via the hex string, the first byte of your array is becoming the most significant byte of the resulting BigInteger.
When you are adding a trailing zero, the last byte of your array is the most significant.
I'm not sure which case is right for you, but that's why you're getting different answers.
From MSDN "The individual bytes in the value array should be in little-endian order, from lowest-order byte to highest-order byte". So the mistake is the order of bytes:
BigInteger bi2 = new BigInteger(ripeHashNetwork.Reverse().ToArray<byte>());
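Putting the pieces together, a sketch of the round trip using the byte array from the question (the original leading 0x00 becomes a trailing zero after reversing, which keeps the sign positive):

```csharp
using System;
using System.Linq;
using System.Numerics;

class BigIntegerDemo
{
    static void Main()
    {
        // Big-endian hash bytes from the question; the leading 0x00
        // becomes the high-order zero that keeps the value positive.
        byte[] ripeHashNetwork = { 0, 1, 9, 102, 119, 96, 6, 149, 61, 85, 103, 67,
                                   158, 94, 57, 248, 106, 13, 39, 59, 238, 214, 25, 103, 246 };

        // The BigInteger(byte[]) constructor expects little-endian
        // input, so reverse the big-endian array first.
        BigInteger bi = new BigInteger(ripeHashNetwork.Reverse().ToArray());
        Console.WriteLine(bi);
        // 25420294593250030202636073700053352635053786165627414518
    }
}
```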

Convert large number to two bytes in C#

I'm trying to convert a number from a textbox into 2 bytes which can then be sent over serial. The numbers range from -500 to 500. I already have a setup so I can simply send a string which is then converted to a byte. Here's an example:
send_serial("137", "1", "244", "128", "0")
The textbox number will go in the 2nd and 3rd bytes
This will make my Roomba (The robot that all this code is for) drive forward at a velocity of 500 mm/s. The 1st number sent tells the roomba to drive, 2nd and 3rd numbers are the velocity and the 4th and 5th numbers are the radius of the turn (between 2000 and -2000, also has a special case where 32768 is straight).
var value = "321";
var shortNumber = Convert.ToInt16(value);
var bytes = BitConverter.GetBytes(shortNumber);
Alternatively, if you require Big-Endian ordering:
var bigEndianBytes = new[]
{
(byte) (shortNumber >> 8),
(byte) (shortNumber & byte.MaxValue)
};
Assuming you are using System.IO.Ports.SerialPort, you will call SerialPort.Write(byte[], int, int) to send the data.
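For example, a velocity of 500 mm/s splits into the two big-endian bytes 0x01 and 0xF4, matching the "1" and "244" in the send_serial example from the question (a sketch; the command layout follows the question's description):

```csharp
using System;

class VelocityBytes
{
    static void Main()
    {
        short velocity = 500; // mm/s, range -500..500

        byte high = (byte)(velocity >> 8);            // 0x01 == 1
        byte low  = (byte)(velocity & byte.MaxValue); // 0xF4 == 244

        // Drive command: opcode 137, velocity bytes, radius 0x8000 (straight).
        byte[] command = { 137, high, low, 128, 0 };

        Console.WriteLine($"{high} {low}"); // 1 244
    }
}
```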
If your input looks like 99,255, you can extract the two bytes like this:
// Split the string into two parts
string[] strings = textBox1.text.Split(',');
byte byte1, byte2;
// Make sure it has only two parts,
// and parse the string into a byte, safely
if (strings.Length == 2
&& byte.TryParse(strings[0], System.Globalization.NumberStyles.Integer, System.Globalization.CultureInfo.InvariantCulture, out byte1)
&& byte.TryParse(strings[1], System.Globalization.NumberStyles.Integer, System.Globalization.CultureInfo.InvariantCulture, out byte2))
{
// Form the bytes to send
byte[] bytes_to_send = new byte[] { 137, byte1, byte2, 128, 0 };
// Writes the data to the serial port.
serialPort1.Write(bytes_to_send, 0, bytes_to_send.Length);
}
else
{
// Show some kind of error message?
}
Here I assume your "byte" is from 0 to 255, which is the same as C#'s byte type. I used byte.TryParse to parse the string into a byte.
