I'm struggling with converting hex strings to signed integers in C#.
Let's say I have the following string:
AAFE B4FE B8FE
Here we have 3 samples. Each sample (a signed 16-bit value) is written as an ASCII hexadecimal sequence of 4 digits (2 × 2 digits per byte).
Any suggestions?
Thank you.
If you need to control the endianness of the parsed values (instead of assuming that they are in little-endian byte order), then you need to place each byte in the appropriate position within the resulting short.
Note that exceptions will be thrown in HexToByte if the string values are not well formatted.
// Requires: using System.Globalization; (for NumberStyles)
static byte HexToByte(string value, int offset)
{
    // Take the two hex digits starting at 'offset' and parse them as one byte.
    string hex = value.Substring(offset, 2);
    return byte.Parse(hex, NumberStyles.HexNumber);
}

static short HexToSigned16(string value, bool isLittleEndian)
{
    byte first = HexToByte(value, 0);
    byte second = HexToByte(value, 2);
    if (isLittleEndian)
        return (short)(first | (second << 8));   // first byte is the low-order byte
    else
        return (short)(second | (first << 8));   // first byte is the high-order byte
}
...
string[] values = "AAFE B4FE B8FE".Split();
foreach (string value in values)
{
    Console.WriteLine("{0} == {1}", value, HexToSigned16(value, true));
}
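With the sample string from the question, this should print something like AAFE == -342, B4FE == -332 and B8FE == -328 when isLittleEndian is true, since each 4-digit group is treated as a little-endian byte pair.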
You can parse strings of numbers of any standard base using overloads in the Convert class that accept a base. In this case, you'd probably want the Convert.ToInt16(String, Int32) overload.
Then you could do something like this:
var groupings = "AAFE B4FE B8FE".Split();
var converted = groupings
    .Select(grouping => Convert.ToInt16(grouping, 16))
    .ToList();
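Note that Convert.ToInt16 interprets the hex digits exactly as written (most significant digit first), so if the samples are little-endian byte pairs as described in the question, you may want to swap each pair first. A minimal sketch, using a hypothetical SwapBytePairs helper that is not part of the original answer:

// Requires: using System; using System.Linq;

// Hypothetical helper: "AAFE" -> "FEAA", so the most significant byte
// comes first, which is how Convert.ToInt16 reads a hex string.
static string SwapBytePairs(string hex) =>
    hex.Substring(2, 2) + hex.Substring(0, 2);

var converted = "AAFE B4FE B8FE".Split()
    .Select(g => Convert.ToInt16(SwapBytePairs(g), 16))
    .ToList();                                   // -342, -332, -328
foreach (var v in converted) Console.WriteLine(v);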
Currently I am using the long integer type. I used the following to convert between binary strings and numbers:
Convert.ToInt64(BinaryString, 2); //Convert binary string of base 2 to number
Convert.ToString(LongNumber, 2); //Convert long number to binary string of base 2
Now the numbers I am using have exceeded 64 bits, so I started using BigInteger. I can't seem to find the equivalent of the code above.
How can I convert a binary string of more than 64 bits to a BigInteger number and vice versa?
Update:
The references in the answer contain the answer I want, but I am having some trouble with the conversion from number to binary.
I have used the following code, which is available in the first reference:
public static string ToBinaryString(this BigInteger bigint)
{
    var bytes = bigint.ToByteArray();
    var idx = bytes.Length - 1;

    // Create a StringBuilder having appropriate capacity.
    var base2 = new StringBuilder(bytes.Length * 8);

    // Convert first byte to binary.
    var binary = Convert.ToString(bytes[idx], 2);

    // Ensure leading zero exists if value is positive.
    if (binary[0] != '0' && bigint.Sign == 1)
    {
        base2.Append('0');
    }

    // Append binary string to StringBuilder.
    base2.Append(binary);

    // Convert remaining bytes adding leading zeros.
    for (idx--; idx >= 0; idx--)
    {
        base2.Append(Convert.ToString(bytes[idx], 2).PadLeft(8, '0'));
    }

    return base2.ToString();
}
The result I got is wrong:
100001000100000000000100000110000100010000000000000000000000000000000000 ===> 2439583056328331886592
2439583056328331886592 ===> 0100001000100000000000100000110000100010000000000000000000000000000000000
If you put the resulting binary strings under each other, you will notice that the conversion is correct and that the problem is just an extra leading zero on the left:
100001000100000000000100000110000100010000000000000000000000000000000000
0100001000100000000000100000110000100010000000000000000000000000000000000
I tried reading the explanation provided in the code and changing it, but no luck.
Update 2:
I was able to solve it by changing the following in the code:
// Ensure leading zero exists if value is positive.
if (binary[0] != '0' && bigint.Sign == 1)
{
    base2.Append('0');
    // Append binary string to StringBuilder.
    base2.Append(binary);
}
Unfortunately, there is nothing built into the .NET Framework.
Fortunately, the Stack Overflow community has already solved both problems:
BigInteger -> Binary: BigInteger to Hex/Decimal/Octal/Binary strings?
Binary -> BigInteger: C# Convert large binary string to decimal system
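For reference, here is a minimal sketch of the second direction (binary string to BigInteger), assuming the input contains only '0' and '1' characters and represents a non-negative value (the FromBinaryString name is mine, not from the linked answers):

using System.Numerics;

static BigInteger FromBinaryString(string bits)
{
    BigInteger result = BigInteger.Zero;
    foreach (char c in bits)
    {
        result <<= 1;               // make room for the next bit
        if (c == '1') result |= 1;  // set the low bit when the digit is 1
    }
    return result;
}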
There is a good reference on MSDN about BigInteger; have a look:
https://msdn.microsoft.com/en-us/library/system.numerics.biginteger(v=vs.110).aspx
There is also a post about converting from binary to BigInteger: Conversion of a binary representation stored in a list of integers (little endian) into a Biginteger
This example is from MSDN.
string positiveString = "91389681247993671255432112000000";
string negativeString = "-90315837410896312071002088037140000";
BigInteger posBigInt = 0;
BigInteger negBigInt = 0;
try
{
    posBigInt = BigInteger.Parse(positiveString);
    Console.WriteLine(posBigInt);
}
catch (FormatException)
{
    Console.WriteLine("Unable to convert the string '{0}' to a BigInteger value.",
                      positiveString);
}

if (BigInteger.TryParse(negativeString, out negBigInt))
    Console.WriteLine(negBigInt);
else
    Console.WriteLine("Unable to convert the string '{0}' to a BigInteger value.",
                      negativeString);

// The example displays the following output:
//    91389681247993671255432112000000
//    -90315837410896312071002088037140000
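As a side note, BigInteger.Parse can also handle hexadecimal input via NumberStyles, although there is still no built-in base-2 overload. A small illustration:

using System;
using System.Globalization;
using System.Numerics;

// With AllowHexSpecifier, a leading hex digit of 8-F makes the value negative
// (two's complement), so prepend "0" to keep it positive.
BigInteger fromHex = BigInteger.Parse("0FF", NumberStyles.AllowHexSpecifier);
Console.WriteLine(fromHex); // 255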
There are notations for writing numbers in C# that tell whether what you wrote is a float, double, integer, and so on.
I would like to write a binary number; how do I do that?
Say I have a byte:
byte Number = 10011000 //(8 bits)
How can I write it without having to work out that 10011000 in binary is 152 in decimal?
P.S.: Parsing a string is completely out of the question (I need performance).
As of C# 7.0 you can use the 0b prefix to write binary literals, similar to the 0x prefix for hex:
int x = 0b1010000; //binary value of 80
int seventyFive = 0b1001011; //binary value of 75
give it a shot
You can write this:
int binaryNotation = 0b_1001_1000;
In C# 7.0 and later, you can use the underscore '_' as a digit separator in decimal, binary, or hexadecimal notation to improve legibility (placing the separator directly after the 0b prefix, as above, requires C# 7.2).
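For illustration, the same separator works in decimal, hexadecimal, and binary literals (the values below are arbitrary examples):

int million = 1_000_000;      // decimal literal
int mask    = 0x00FF_00FF;    // hexadecimal literal
int bits    = 0b0110_1001;    // binary literal (105)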
There's no way to do it other than parsing a string, I'm afraid:
byte number = (byte) Convert.ToInt32("10011000", 2);
Unfortunately you will be unable to assign constant values like that, of course.
If you find yourself doing that a lot, I guess you could write an extension method on string to make things more readable:
public static class StringExt
{
    public static byte AsByte(this string self)
    {
        return (byte)Convert.ToInt32(self, 2);
    }
}
Then the code would look like this:
byte number = "10011000".AsByte();
I'm not sure that would be a good idea though...
Personally, I just use hex initializers, e.g.
byte number = 0x98;
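For comparison, here is the same value written three ways (all compile to 152), assuming your compiler supports the binary-literal syntax mentioned in the other answers:

byte fromHex    = 0x98;          // hexadecimal initializer
byte fromBinary = 0b1001_1000;   // binary literal, C# 7.0+
byte fromDec    = 152;           // plain decimal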
I'm trying to convert a string that includes a hex value into its equivalent signed short in C#
for example:
the hex equivalent of -1 is 0xFFFF (in two bytes)
I want to do the inverse, i.e. I want to convert 0xFFFF into -1.
I'm using
string x = "FF";
short y = Convert.ToInt16(x,16);
but the output y is 255 instead of -1; I need the signed equivalent.
Can anyone help me? Thanks.
When your input is "FF" you have the string representation in hex of a single byte.
If you convert it to a short (two bytes), the most significant bit of the 16-bit result is not set, so no sign is applied and you get the value 255.
A string of "FFFF", instead, represents two bytes where the most significant bit is set to 1, so the result is negative if assigned to a signed type like Int16, and 65535 if assigned to an unsigned type like ushort.
string number = "0xFFFF";
short n = Convert.ToInt16(number, 16);
ushort u = Convert.ToUInt16(number, 16);
Console.WriteLine(n);
Console.WriteLine(u);
number = "0xFF";
byte b = Convert.ToByte(number, 16);
short x = Convert.ToInt16(number, 16);
ushort z = Convert.ToUInt16(number, 16);
Console.WriteLine(n);
Console.WriteLine(x);
Console.WriteLine(z);
Output:
-1
65535
-1
255
255
You're looking to convert the string representation of a signed byte, not short.
You should use Convert.ToSByte(string) instead.
A simple unit test to demonstrate
[Test]
public void MyTest()
{
    short myValue = Convert.ToSByte("FF", 16);
    Assert.AreEqual(-1, myValue);
}
Please see http://msdn.microsoft.com/en-us/library/bb311038.aspx for full details on converting between hex strings and numeric values.
Consider the following code (.Dump() in LINQPad simply writes to the console):
var s = "𤭢"; //3 byte code point. 4 byte UTF32 encoded
s.Dump();
s.Length.Dump(); // 2
TextReader sr = new StringReader("𤭢");
int i;
while ((i = sr.Read()) >= 0)
{
    // notice here we are yielded two
    // 2-byte values, but as ints
    i.ToString("X").Dump(); // D852, DF62
}
Given the outcome above, why does TextReader.Read() return an int and not a char? Under what circumstances might it read a value greater than 2 bytes?
TextReader.Read() will never read more than 2 bytes' worth of character data; however, it returns -1 to mean "no more characters to read" (end of the stream). Therefore, its return type has to widen from Char (2 bytes) to Int32 (4 bytes) to be able to express the full Char range plus -1.
TextReader.Read() probably uses int to allow returning -1 when reaching the end of the text:
The next character from the text reader, or -1 if no more characters are available. The default implementation returns -1.
And the Length is 2 because strings are UTF-16 sequences, which require surrogate pairs to represent code points above U+FFFF.
{ 0xD852, 0xDF62 } <=> U+24B62 (𤭢)
You can get the UTF-32 code point from them with Char.ConvertToUtf32():
Char.ConvertToUtf32("𤭢", 0).ToString("X").Dump(); // 24B62
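Here is a minimal sketch of how you might combine the surrogate pairs yourself while reading from a TextReader (just an illustration, not from the original answer):

using System;
using System.IO;

TextReader sr = new StringReader("𤭢");
int i;
while ((i = sr.Read()) >= 0)
{
    char c = (char)i;
    if (char.IsHighSurrogate(c) && sr.Peek() >= 0)
    {
        // Combine the high surrogate with the following low surrogate.
        int codePoint = char.ConvertToUtf32(c, (char)sr.Read());
        Console.WriteLine(codePoint.ToString("X")); // 24B62
    }
    else
    {
        Console.WriteLine(((int)c).ToString("X"));
    }
}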
I was wondering if there's any difference between converting characters to bytes with Encoding.UTF8.GetBytes and manually casting each character to byte with (byte).
For an example, look at following code:
public static byte[] ConvertStringToByteArray(string str)
{
    int i, n;
    n = str.Length;
    byte[] x = new byte[n];
    for (i = 0; i < n; i++)
    {
        x[i] = (byte)str[i];
    }
    return x;
}
var arrBytes = ConvertStringToByteArray("Hello world");
or
var arrBytes = Encoding.UTF8.GetBytes("Hello world");
I liked the question, so I executed your code on ANSI Hebrew text read from a text file.
The text was "שועל"
string text = System.IO.File.ReadAllText(@"d:\test.txt");
var arrBytes = ConvertStringToByteArray(text);
var arrBytes1 = Encoding.UTF8.GetBytes(text);
The results were different: as you can see, there is a difference whenever the code point of any of your characters exceeds the 0-255 range of a byte.
Your ConvertStringToByteArray method is incorrect.
You are casting each char to byte. A char's numerical value is its UTF-16 code unit (its Unicode code point for BMP characters), which can be larger than a byte, so the cast will silently truncate the value (or throw in a checked context).
Your example works because you've used characters with code points within the byte range.
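For instance, using the Hebrew letter ש (U+05E9) as an illustration of the difference:

using System;
using System.Text;

char c = 'ש';                                       // U+05E9, code point 1513
byte truncated = (byte)c;                           // 0xE9 (233): only the low byte survives
byte[] utf8 = Encoding.UTF8.GetBytes(c.ToString()); // { 0xD7, 0xA9 }: two bytes in UTF-8
Console.WriteLine($"{truncated:X2} vs {utf8[0]:X2} {utf8[1]:X2}"); // E9 vs D7 A9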
When you want to convert characters that fall outside the single-byte range, you can't use the first approach; you must explicitly choose an encoding.
Yes, there is a difference. All .NET strings are stored as UTF-16 (little-endian).
Use this code to make a test string so you get high-order bytes in your chars, i.e. chars that have a different representation in UTF-8 and UTF-16.
var testString = new string(
    Enumerable.Range(char.MinValue, char.MaxValue - char.MinValue)
        .Select(Convert.ToChar)
        .ToArray());
This makes a string with every possible char value. If you do
ConvertStringToByteArray(testString).SequenceEqual(
Encoding.UTF8.GetBytes(testString));
It will return false, demonstrating that the results differ.