I'm trying to convert a string that contains a hex value into its equivalent signed short in C#.
For example:
the hex representation of -1 (in two bytes) is 0xFFFF.
I want to do the inverse, i.e. convert 0xFFFF back into -1.
I'm using
string x = "FF";
short y = Convert.ToInt16(x, 16);
but the output y is 255 instead of -1; I need the signed equivalent.
Can anyone help me?
Thanks
When your input is "FF" you have the string representation in hex of a single byte.
When that value is assigned to a short (two bytes), the most significant bit of the short is not set, so no sign is applied to the converted number and you get 255.
A string of "FFFF", on the other hand, represents two bytes whose most significant bit is set to 1, so the result is negative (-1) if assigned to a signed type like Int16, and 65535 if assigned to an unsigned type like ushort:
string number = "0xFFFF";
short n = Convert.ToInt16(number, 16);
ushort u = Convert.ToUInt16(number, 16);
Console.WriteLine(n);
Console.WriteLine(u);
number = "0xFF";
byte b = Convert.ToByte(number, 16);
short x = Convert.ToInt16(number, 16);
ushort z = Convert.ToUInt16(number, 16);
Console.WriteLine(n);
Console.WriteLine(x);
Console.WriteLine(z);
Output:
-1
65535
-1
255
255
You're looking to convert the string representation of a signed byte, not a short.
You should use Convert.ToSByte(string, int) with base 16 instead.
A simple unit test to demonstrate:
[Test]
public void MyTest()
{
short myValue = Convert.ToSByte("FF", 16);
Assert.AreEqual(-1, myValue);
}
Please see http://msdn.microsoft.com/en-us/library/bb311038.aspx for full details on converting between hex strings and numeric values.
I am outputting the char 0x11a1 by converting it to char.
Then I multiply 0x11a1 by itself and output it again, but I do not get what I expect.
By doing this {int hgvvv = chch0;} and writing it to the console, I can see that the computer thinks 0x11a1 * 0x11a1 equals 51009, but it actually equals 20367169.
As a result I do not get what I want.
Could you please explain why?
char chch0 = (char)0x11a1;
Console.WriteLine(chch0);
chch0 = (char)(chch0 * chch0);
Console.WriteLine(chch0);
int hgvvv = chch0;
Console.WriteLine(hgvvv);
We know that 1 byte is 8 bits.
We know that a char in C# is 2 bytes, which is 16 bits.
If we multiply 0x11a1 by 0x11a1 we get 0x136c741.
0x136c741 in binary is 0001001101101100011101000001.
Since we only have 16 bits, we keep only the last 16 bits, which are 1100011101000001.
1100011101000001 in hex is 0xc741, which is the 51009 you are seeing.
You are being limited by the size of the char type in C#.
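To see the truncation directly in code, here is a small sketch of the same arithmetic (variable names are mine, not from the question):
int product = 0x11a1 * 0x11a1;       // 20367169 (0x136C741), computed as a 32-bit int
char truncated = (char)product;      // the unchecked cast keeps only the low 16 bits: 0xC741
Console.WriteLine(product);          // 20367169
Console.WriteLine((int)truncated);   // 51009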
Hope this answer cleared things up!
By enabling the checked context in your project or by adding it this way in your code:
checked {
char chch0 = (char)0x11a1;
Console.WriteLine(chch0);
chch0 = (char)(chch0 * chch0); // OverflowException
Console.WriteLine(chch0);
int hgvvv = chch0;
Console.WriteLine(hgvvv);
}
You will see that you get an OverflowException, because the char type (2 bytes) can only store values up to Char.MaxValue = 0xFFFF.
The value you expect (20367169) is larger than 0xFFFF, so in the default unchecked context you get only the two least significant bytes the type was able to store, which is:
Console.WriteLine(20367169 & 0xFFFF);
// prints: 51009
I need to convert 2 integers to a float number. I'm using the following function to do this conversion:
string hexString = value1 + value2;
uint num = uint.Parse(hexString, NumberStyles.AllowHexSpecifier);
byte[] floatVals = BitConverter.GetBytes(num);
f = BitConverter.ToSingle(floatVals, 0);
The function works fine, except when the second integer is negative, which gives a small difference in the decimal part.
Example:
16998 and 16673: 57.76 -> Correct
16998 and -11174: 57.54 -> Incorrect (the correct value is 57.71).
What I'm doing to get close to the correct number is multiplying the second integer by -1 to make it positive. My question is: how do I get the exact value when the second integer is negative?
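If the two integers are meant to supply the upper and lower 16 bits of the float, one way to handle a negative second value is to mask each integer down to 16 bits before combining them. This is a sketch based on that assumption (the ToFloat name is mine); it reproduces 57.71 for the pair 16998 / -11174:
static float ToFloat(int value1, int value2)
{
    // Keep only the low 16 bits of each value so a negative value2 does not
    // contribute sign-extension bits, then combine them into the 32-bit pattern.
    uint bits = ((uint)(value1 & 0xFFFF) << 16) | (uint)(value2 & 0xFFFF);
    return BitConverter.ToSingle(BitConverter.GetBytes(bits), 0);
}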
I have a binary data file that contains both negative and positive values, stored in 2's complement form.
Whenever I try to read those data using the BinaryReader class I always get a positive number.
private double nextByte(BinaryReader reader)
{
byte[] bValue = reader.ReadBytes(2);
StringBuilder hex = new StringBuilder(bValue.Length * 2);
foreach (byte b in bValue)
hex.AppendFormat("{0:x2}", b);
int decValue = int.Parse(hex.ToString(), System.Globalization.NumberStyles.HexNumber);
return Convert.ToDouble(decValue);
}
For example:
Let's say the data file contains 1011100101011110. The equivalent unsigned decimal value is 47454, while the decimal value of the signed 2's complement is -18082.
The value I build from BinaryReader.ReadBytes(2) always parses to a positive number, but I am expecting -18082.
The problem is that the data file contains both positive and negative values, so how can I achieve this? Can anyone help me?
If you want to continue with your crazy conversion, you simply need to cast the resulting int to short:
int decValue = 47454;
// int.Parse(hex.ToString(), System.Globalization.NumberStyles.HexNumber);
return (short)decValue;
The cast to short simply trims the 2 high bytes of the int, leaving the two's complement representation of the negative value; the resulting short value -18082 is then automatically converted to -18082.0 to match the double return type.
Note that reading a short directly with ReadInt16 is likely what you really want.
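Here is a small sketch of the whole method with that cast applied. Note that BinaryReader.ReadInt16 assumes little-endian byte order, while the hex-string code above treats the first byte as the high byte (big-endian), so the manual combination below keeps that byte order:
private static double NextValue(BinaryReader reader)
{
    byte[] bytes = reader.ReadBytes(2);
    // Big-endian: the first byte is the high byte, matching the original hex-string approach.
    short value = (short)((bytes[0] << 8) | bytes[1]);
    return value;   // implicit widening to double, e.g. bytes B9 5E -> -18082.0
}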
Please help me to understand the difference between Convert and BitConverter. In the example below, the two methods give me two different answers:
UInt64 x = 0x4028b0a3d70a3d71;
// Use Convert
double d = Convert.ToDouble(x); // d = 4.6231392352297441E+18
// Use BitConverter
byte[] xArray = BitConverter.GetBytes(x);
double d1 = BitConverter.ToDouble(xArray, 0); // d1 = 12.345
Thank you
These two methods are used for different purposes. Convert.ToDouble(x) is equivalent to a cast, (double)x, which can be useful if you need the integer value to be treated as a floating point value, say, for mathematical operations:
int x = 7;
Console.WriteLine(x / 3); // 2
Console.WriteLine(Convert.ToDouble(x) / 3); // 2.3333333333333335
Console.WriteLine((double)x / 3); // 2.3333333333333335
The BitConverter class is useful if you want to transmit the value over the network as a series of bytes; you'd use BitConverter.GetBytes() on the sending side and the corresponding BitConverter.ToXxx method (here, BitConverter.ToDouble) on the receiving end:
double x = 12.345;
byte[] xArray = BitConverter.GetBytes(x);
// Send xArray to another system over the network
// ...on the receiving system, presuming same endianness:
double d1 = BitConverter.ToDouble(xArray, 0); // d1 = 12.345
Now, in your example, let's take a look at what happens to the value of x in both cases:
UInt64 x = 0x4028b0a3d70a3d71;
// Use Convert
double d = Convert.ToDouble(x); // d = 4.6231392352297441E+18
d is a cast of x to double; in decimal form, 0x4028b0a3d70a3d71 = 4,623,139,235,229,744,497 = 4.623139235229744497E+18 in scientific notation. No magic here, it's pretty much what you'd expect to happen. Onward.
// Use BitConverter
byte[] xArray = BitConverter.GetBytes(x);
double d1 = BitConverter.ToDouble(xArray, 0); // d1 = 12.345
...what? Well, let's look at how the double type is stored in memory. According to the IEEE 754 specification, the double format is:
first bit is a sign bit (0 = positive; 1 = negative)
next 11 bits are the exponent
next 52 bits are the significand (well, 53, but only 52 are stored)
Here's the binary representation of 0x4028b0a3d70a3d71, arranged into the 3 sections we need to consider:
0 10000000010 1000101100001010001111010111000010100011110101110001
The following formula is used to convert this storage format to an actual numeric value:
value = (-1)^sign x (1.b51b50...b0)₂ x 2^(exponent - 1023)
Instead of going through this math manually, we can plug these bits into a floating point converter; the decimal result it gives is 12.345, the same as what you're getting with BitConverter.ToDouble(xArray, 0), but certainly not the same as the casting operation performed by Convert.ToDouble(x).
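You can also reproduce the reinterpretation without the intermediate byte array by using BitConverter.Int64BitsToDouble, which reads the same 64 bits as a double (a small sketch, not part of the original answer):
ulong x = 0x4028b0a3d70a3d71;
double reinterpreted = BitConverter.Int64BitsToDouble((long)x);  // 12.345 - same bits, read as a double
double converted = Convert.ToDouble(x);                          // 4.6231392352297441E+18 - numeric value converted
Console.WriteLine(reinterpreted);
Console.WriteLine(converted);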
The first:
you convert the integer value held in x to a double.
The second:
you treat the value in x as the bit representation of a double, not of an integer.
I need your help; here is part of my code, but there is a problem I can't solve.
Please help me.
This is what I have done so far; I can get a positive integer in 16-bit binary form:
Console.WriteLine("Enter an integer : ");
string inputnumber = Console.ReadLine();
int num = int.Parse(inputnumber);
string value = Convert.ToString(num, 2).PadLeft(16, '0');
Console.WriteLine("The bits are : {0}", value);
Console.ReadKey();
The issue is how to get the negative value of an integer in 16-bit binary form;
for example, when I input 5 I get: 0000000000000101
and for -5 I need: 1111111111111011
In C# int is a 32-bit type. You should use short (a 16-bit type) instead. For positive numbers up to 32767 the lower 16 bits of int and short are the same, but for negative numbers they differ.
short num = short.Parse(inputnumber);
This is correct behavior, since that is how the value is stored in the computer. It is the so-called two's complement representation, where the most significant (leftmost) bit tells you that the number is negative. Also keep in mind that an int contains 32 bits.
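Putting the short-based fix together, here is a small sketch of the corrected program (assuming the input fits in 16 bits):
Console.WriteLine("Enter an integer : ");
string inputnumber = Console.ReadLine();
short num = short.Parse(inputnumber);
// Convert.ToString(short, 2) emits the two's complement bits of the 16-bit value,
// so -5 already comes out as 1111111111111011; PadLeft only matters for positives.
string value = Convert.ToString(num, 2).PadLeft(16, '0');
Console.WriteLine("The bits are : {0}", value);
Console.ReadKey();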