I am trying to convert hex data to signed int/decimal and can't figure out what I'm doing wrong.
I need FE to turn into -2.
I'm using Convert.ToInt32(fields[10], 16) but am getting 254 instead of -2.
Any assistance would be greatly appreciated.
int is 32 bits wide, so 0xFE is really being interpreted as 0x000000FE by Convert.ToInt32(string, int), which is 254 as an int.
Since you want to work with a signed-byte range of values, use Convert.ToSByte(string, int) instead (byte is unsigned in C#, so you need the sbyte type):
Convert.ToSByte("FE",16)
Interpret the value as a signed byte:
sbyte value = Convert.ToSByte("FE", 16); //-2
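If you ultimately need the value as an int, the widening conversion from sbyte preserves the sign (a minimal sketch, assuming the hex string always encodes exactly one byte):

sbyte sb = Convert.ToSByte("FE", 16); // -2
int asInt = sb;                       // implicit widening conversion keeps the sign: still -2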
Well, the bounds of Int32 are -2,147,483,648 to 2,147,483,647, so FE simply parses to 254.
If you want values of 128 (0x80) and above to wrap around to negative numbers, the most elegant solution is probably to use a signed byte (sbyte):
csharp> Convert.ToSByte("FE",16);
-2
So the issue is that when using C# the char is 4 bytes, so "ABC" is (65 0 66 0 67 0).
When I send that through a socket to a wstring in C++, all I get as output is "A".
How am I able to convert such a string to a C++ string?
Sounds like you need ASCII or UTF-8 encoding instead of Unicode.
65 0 66 0 67 0 is only going to get you the A, since the next zero is interpreted as a null termination character in C++.
Strategies for converting Unicode to ASCII can be found here.
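On the C# side you can pick the encoding explicitly before writing to the socket; a minimal sketch (assuming you control the sending code):

using System.Text;

byte[] utf16 = Encoding.Unicode.GetBytes("ABC"); // 65 0 66 0 67 0 - what is being sent now
byte[] utf8  = Encoding.UTF8.GetBytes("ABC");    // 65 66 67 - safe to read into a char-based std::string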
using c# the char is 4 bytes
No, in C# strings are encoded in UTF-16. Each code unit takes two bytes in UTF-16, and for simple characters a single code unit represents a code point (e.g. 65 0 for 'A').
On Windows, wstring is usually UTF-16 encoded (2-4 bytes per character) too, but on Unix/Linux wstring usually uses UTF-32 encoding (always 4 bytes).
For ASCII characters, the Unicode code point has the same numerical value as the ASCII code - therefore UTF-16-encoded ASCII text often looks like this: {num} 0 {num} 0 {num} 0...
See the details here: https://en.wikipedia.org/wiki/UTF-16
Could you show us some code for how you constructed your wstring object?
The null byte is critical here, because it is the end marker for ASCII/ANSI strings.
I have been able to solve the issue by using a std::u16string.
Here is some example code
std::vector<char> data = { 65, 0, 66, 0, 67, 0 };
// reinterpret the raw bytes as UTF-16 code units (assumes the byte order matches the host)
std::u16string str(reinterpret_cast<const char16_t*>(data.data()), data.size() / 2);
// str now holds u"ABC" and is encoded correctly
I'm reading CLR via C# by Jeffrey Richter, and on page 115 there is an example of an overflow resulting from arithmetic operations on primitives. Can someone please explain?
Byte b = 100;
b = (Byte) (b+200); // b now contains 44 (or 2C in Hex).
I understand that there should be an overflow, since byte is an unsigned 8-bit value, but why does its value equal 44?
100+200 is 300; 300 is (in bits):
1 0010 1100
Of this, only the last 8 bits are kept, so:
0010 1100
which is: 44
The binary representation of 300 is 100101100. That's nine bits, one more than the byte type has room for. Therefore the highest bit is discarded, leaving 00101100. When you translate this value to decimal you get 44.
A byte can represent integers in the range 0 to 255 (inclusive). When you assign it a value greater than 255, like 300, the value 300 - 256 = 44 is stored instead. This happens because a byte consists of 8 bits and each bit can be either 0 or 1, so a byte can represent 2^8 = 256 integers, starting from 0.
In other words, you divide your number by 256 and keep the remainder; only that remainder can be represented by a byte.
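A quick way to see the wrap-around in code (a small sketch; the checked line is commented out because it would throw):

byte b = 100;
Console.WriteLine((byte)(b + 200));             // unchecked by default: 300 wraps to 44
Console.WriteLine(300 % 256);                   // 44 - the same result via modulo
// Console.WriteLine(checked((byte)(b + 200))); // with checked arithmetic this throws an OverflowException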
I am given an 11-bit signed hex value that must be stored in an Int32 data type. When I cast the hex value to an Int32, the 11-bit value is obviously smaller than the Int32, so it zero-fills the higher-order bits.
Basically I need to be able to store 11-bit signed values in an Int32 or Int16 from a given 11-bit hex value.
For example:
string hex = "7FF";
If I parse this to an Int32 using int.Parse(hex, System.Globalization.NumberStyles.HexNumber),
I get 2047 when it should be -1 (according to the 11-bit binary 111 1111 1111).
How can I accomplish this in c#?
It's actually very simple, just two shifts. Shifting right on an int is an arithmetic shift, so it keeps the sign; that's useful here. In order to use it, the sign bit of the 11-bit value has to be aligned with the sign bit of the int:
x <<= -11;
Then do the right shift:
x >>= -11;
That's all.
The -11, which may seem odd, is just a shorter way to write 32 - 11. That's not in general the same thing, but shift counts are masked by 31 (ints) or 63 (longs), so in this case you can use that shortcut.
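Putting the two shifts together, a sketch (assuming the raw 11-bit pattern has already been parsed into an int):

int x = Convert.ToInt32("7FF", 16); // 2047 - the raw 11-bit pattern
x <<= -11;                          // count masked to 21: moves the 11-bit sign bit into bit 31
x >>= -11;                          // arithmetic shift right by 21 sign-extends
Console.WriteLine(x);               // -1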
string hex = "0x7FF";
var i = (Convert.ToInt32(hex, 16) << 21) >> 21;
How can I convert a byte to a number in C#? For example, 00000001 to 1, 00000011 to 3, 00001011 to 11. I have a byte array with numbers encoded as binary bytes, but I need to get those numbers and append them to a string.
You can do this.
// If the system architecture is little-endian (that is, little end first),
// reverse the byte array.
if (BitConverter.IsLittleEndian)
    Array.Reverse(bytes);
int i = BitConverter.ToInt32(bytes, 0);
where bytes is your byte[]. You would want to take a look here.
In C# byte is already an unsigned number ranging from 0 to 255. You can freely assign them to integers, or convert to other numeric types.
Bytes are numbers.
If you want the decimal representation of a single byte as a string, just call ToString() on it.
If you have an array of bytes that are part of a little-endian single number, you can use the BitConverter class to convert them to a 16, 32, or 64 bit signed or unsigned integer.
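A small sketch of both cases (the array contents are just example values):

byte[] data = { 1, 3, 11 };
var sb = new System.Text.StringBuilder();
foreach (byte b in data)
    sb.Append(b).Append(' ');                     // builds "1 3 11 "
Console.WriteLine(sb.ToString());

byte[] word = { 11, 0, 0, 0 };                    // four bytes of one little-endian number
Console.WriteLine(BitConverter.ToInt32(word, 0)); // 11 (on a little-endian machine)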
you can use the built-in Convert:
foreach (byte b in array) {
    // a byte already holds a numeric value; Convert simply widens it to a long
    long dec = Convert.ToInt64(b);
    // (Convert.ToInt64(s, 2) is the overload for parsing a binary *string* such as "00001011")
}
Just doing this for fun. I was reading the pseudo-code on Wikipedia and it says, when pre-processing, to append the bit '1' to the message, then append enough '0' bits so that the resulting message length modulo 512 is 448, and then append the length of the message in bits as a 64-bit big-endian integer.
Okay. I'm not sure how to append just a '1' bit, but I figure it could be possible to just append 128 (1000 0000). That wouldn't work in the off chance the resulting message length modulo 512 was already 448 without all those extra 0s, though. In that case I'm not sure how to append just a 1, because I'd need to deal with at least whole bytes. Is that possible in C#?
Also, is there a built-in way to append a big-endian integer? I believe my system is little-endian by default.
It's defined in such a way that, as long as the message is a whole number of bytes, you only ever need to deal with whole bytes. If the message length (mod 64) is 56, then append one byte of 0b10000000, followed by 63 zero bytes, followed by the length. Otherwise, append one byte of 0b10000000, followed by 0 to 62 zero bytes, followed by the length.
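A sketch of that padding step in C#, hand-rolled and assuming the message is a whole number of bytes (the method name Pad is just for illustration):

static byte[] Pad(byte[] message)
{
    // zero bytes needed so that (length + 1 + zeros + 8) is a multiple of 64
    int zeros = (64 - (message.Length + 9) % 64) % 64;
    byte[] padded = new byte[message.Length + 1 + zeros + 8];
    Array.Copy(message, padded, message.Length);
    padded[message.Length] = 0x80;                // the '1' bit followed by seven '0' bits

    // append the original length in bits as a 64-bit big-endian integer
    ulong bitLength = (ulong)message.Length * 8;
    for (int i = 0; i < 8; i++)
        padded[padded.Length - 1 - i] = (byte)(bitLength >> (8 * i));

    return padded;
}

On newer frameworks, System.Buffers.Binary.BinaryPrimitives.WriteUInt64BigEndian can replace the hand-rolled big-endian loop.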
You might check out the BitArray class in System.Collections. One of the ctor overloads takes an array of bytes, etc.
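For example (a sketch; note that BitArray exposes the least significant bit of each byte at the lowest index):

var bits = new System.Collections.BitArray(new byte[] { 0x80 });
Console.WriteLine(bits[7]); // True - the '1' bit of 1000 0000
Console.WriteLine(bits[0]); // False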