Expressing a hex value in 2's complement - C#

I have a string hex value, and I need to express it in 2's complement.
string hx = "FF00";
What I did is convert it to binary:
string h = Convert.ToString(Convert.ToInt32(hx, 16), 2);
Then I tried inverting it, but I couldn't use the NOT operator on the string.
Is there a short way to invert the bits and then add 1 (the 2's complement operation)?

The answer might depend on whether or not the bit width of the value is important to you.
The short answer is:
string hx = "FF00";
uint intVal = Convert.ToUInt32(hx, 16);      // intVal == 65280
uint twosComp = ~intVal + 1;                 // twosComp == 4294902016
string h = string.Format("{0:X}", twosComp); // h == "FFFF0100"
The value of h is then "FFFF0100" which is the 32-bit 2's complement of hx. If you were expecting '100' then you need to use 16-bit calculations:
string hx = "FF00";
ushort intVal = Convert.ToUInt16(hx, 16);      // intVal == 65280
ushort twosComp = (ushort)(~intVal + 1);       // twosComp == 256
string h = string.Format("{0:X}", twosComp);   // h == "100"
Bear in mind that uint is an alias for UInt32 and ushort aliases the UInt16 type. For clarity in this type of operation you'd probably be better off using the explicit names.
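If you need this at other widths as well, a small helper along these lines may be handy (a sketch of my own, not from the answer above; TwosComplementHex is a made-up name, and it assumes the input fits in 64 bits and the width is a multiple of 4):
using System;

// Two's complement of a hex string at a caller-chosen bit width.
static string TwosComplementHex(string hex, int width)
{
    ulong value = Convert.ToUInt64(hex, 16);
    ulong mask = width == 64 ? ulong.MaxValue : (1UL << width) - 1;
    ulong twosComp = (~value + 1) & mask; // invert, add 1, trim to width
    return twosComp.ToString("X").PadLeft(width / 4, '0');
}
With this, TwosComplementHex("FF00", 16) yields "0100" and TwosComplementHex("FF00", 32) yields "FFFF0100".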

Two's complement is really simple:
int value = 100;
value = ~value;     // NOT
value = value + 1;
// Now value is -100
Remember that two's complement means inverting the bits and then adding 1.
In hex:
int value = 0x45;
value = ~value;     // NOT
value = value + 1;  // value is now -0x45 (-69)
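As a quick sanity check (my own addition): on int, invert-and-add-one is exactly what the unary minus operator does, and formatting with "X" shows the underlying 32-bit pattern:
using System;

int v = 0x45;
Console.WriteLine(~v + 1);                 // -69
Console.WriteLine(-v);                     // -69, the same thing
Console.WriteLine((~v + 1).ToString("X")); // FFFFFFBB, the 32-bit pattern of -0x45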

Binary, Int32, and Hex conversions inquiry

The following code is a work in progress that I'm also using to learn more about converting between bits, hex, and Int32.
A lot of this is obviously repetitive, since we're doing the same thing to seven different "packages," so feel free to gloss over the repeats (I just wanted the entire code structure up front, to maybe answer some questions ahead of time).
/* Pack bits into containers to send them as 32-bit (4 bytes) items */
int finalBitPackage_1 = 0;
int finalBitPackage_2 = 0;
int finalBitPackage_3 = 0;
int finalBitPackage_4 = 0;
int finalBitPackage_5 = 0;
int finalBitPackage_6 = 0;
int finalBitPackage_7 = 0;
var bitContainer_1 = new BitArray(32, false);
var bitContainer_2 = new BitArray(32, false);
var bitContainer_3 = new BitArray(32, false);
var bitContainer_4 = new BitArray(32, false);
var bitContainer_5 = new BitArray(32, false);
var bitContainer_6 = new BitArray(32, false);
var bitContainer_7 = new BitArray(32, false);
string hexValue = String.Empty;
...
*assign 32 bits (from bools) to every bitContainer[] here*
...
/* Using this single 1-D array for all assignments works because as soon as we convert arrays,
we store the result; this way we never overwrite ourselves */
int[] data = new int[1];
/* Copy containers to a 1-dimensional array, then into an Int for transmission */
bitContainer_1.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_1 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_2.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_2 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_3.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_3 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_4.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_4 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_5.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_5 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_6.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_6 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_7.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_7 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
From what I've learned so far, when a binary value is converted to Int32 the first bit indicates the sign, where 1 means negative and 0 means positive. However, the bitArrays that start with a 0 come out as negative numbers after the CopyTo(int[]) call, and the bitArrays that start with a 1 come out positive.
In addition, there is the problem of converting them from their Int32 values into hex values. The values that come out of the array conversion as negative don't get the eight F's added to the front, as checked against http://www.binaryhexconverter.com/. My hex knowledge is limited, and I don't want to lose meaningful data when I transmit it to another system (over TCP/IP, if that matters to anyone). I'll post the values I'm getting below to help clear this up.
Variable        Binary                            Int32        My Hex
bitContainer_1  01010101010101010101010101010101  -1431655766  AAAAAAAA
bitContainer_2  10101010101010101010101010101010   1431655765  55555555
bitContainer_3  00110011001100110011001100110011   -858993460  CCCCCCCC
bitContainer_4  11001100110011001100110011001100    858993459  33333333
bitContainer_5  11100011100011100011100011100011   -954437177  C71C71C7
bitContainer_6  00011100011100011100011100011100    954437176  38E38E38
bitContainer_7  11110000111100001111000011110000    252645135  F0F0F0F
Online Hex Values:
FFFFFFFFAAAAAAAA
55555555
FFFFFFFFCCCCCCCC
33333333
FFFFFFFFC71C71C7
38E38E38
F0F0F0F
If every integer's sign is reversed, multiply by -1 (theIntegerValue * -1) to un-reverse it. It could also have something to do with how you're calling ToString("X"); maybe try a different format string?
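For what it's worth, the sign flips reported in the question are consistent with BitArray treating index 0 as the least significant bit when packing into an int[], so a bit string written left-to-right comes out reversed relative to bit significance. A minimal sketch to confirm the ordering (my own illustration):
using System;
using System.Collections;

var bits = new BitArray(32, false);
bits[0] = true;               // set only index 0
int[] packed = new int[1];
bits.CopyTo(packed, 0);
Console.WriteLine(packed[0]); // 1 -- index 0 is the least significant bit
bits[0] = false;
bits[31] = true;              // set only index 31
bits.CopyTo(packed, 0);
Console.WriteLine(packed[0]); // -2147483648 -- index 31 is the sign bit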

Confusion with the result of ushort

Consider the following code:
ushort a = 60000;
a = (ushort)(a * a / a);
Console.WriteLine("A = " + a);
//This prints 53954. Why??
and
ushort a = 40000;
a = (ushort)(a * a / a);
Console.WriteLine("a = " + a.ToString());
//This prints 40000. how??
Any help appreciated...
Because 60000 * 60000 is 3,600,000,000, but the biggest number an int can hold is 2,147,483,647, so it wraps around to -2,147,483,648.
A ushort can hold 65,535 and then starts over from 0:
For instance, this prints 0:
ushort myShort = 65535;
myShort++;
Console.WriteLine(myShort); //0
It's easier to see this if you break it into steps:
var B = A * A;
That actually exceeds the capacity of an Int32, so it wraps around from -2,147,483,648; B ends up equal to -694967296.
Then when you divide B / A you get -11582, which, when cast to a ushort, becomes 53954.
ushort A = 60000;
var B = A * A; //-694967296
var C = B / A; //-11582
ushort D = (ushort)(C); //53954
The reason that 40000 works is that it does not exceed the capacity of an int32.
ushort A = 40000;
var B = A * A; //1600000000
var C = B / A; //40000
ushort D = (ushort)(C); //40000
A uint can hold 60000 * 60000, though, so this works:
ushort A = 60000;
var B = (uint)A * A; //3600000000
var C = B / A; //60000
ushort D = (ushort)(C); //60000
The reason that casting C to ushort yields 53954 is that the bytes of C (-11582) are:
194
210
255
255
and the bytes of D are:
194
210
The cast keeps only the two low-order bytes, so C and D share the same backing bytes; that's why you get 53954 and -11582.
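A quick way to see those backing bytes for yourself (assuming a little-endian host, which is the norm for .NET):
using System;

int c = -11582;
ushort d = (ushort)c;
Console.WriteLine(string.Join(", ", BitConverter.GetBytes(c))); // 194, 210, 255, 255
Console.WriteLine(string.Join(", ", BitConverter.GetBytes(d))); // 194, 210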
Because it's equivalent to:
A * A = -694967296, because the multiplication is performed as int and the result overflows, leaving a bit pattern that reads back as this negative value. Ultimately, 60000 * 60000 can't be stored in an int, let alone a ushort. Add a watch in debug mode and you'll see this.
Then you have:
-694967296 / 60000, which yields -11582 as an int, but when cast to a ushort yields 53954 - again because of the underlying bit pattern.
Really, this code should be in a checked block, because silent overflow is exactly the kind of thing that causes massive issues.
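As a side note, a minimal sketch of what a checked block changes here (checked is standard C#; in a checked context the overflowing int multiplication throws System.OverflowException instead of silently wrapping):
using System;

ushort a = 60000;
try
{
    // The multiplication is performed as int; in a checked context the
    // overflow throws instead of wrapping around to -694967296.
    int b = checked(a * a);
}
catch (OverflowException ex)
{
    Console.WriteLine(ex.Message);
}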
Good question! If you try 40000 it will work fine.
The reason is that 40000 * 40000 = 1,600,000,000, which is within the limit of an int, so the intermediate result is not truncated.
With 60000 the multiplication exceeds that limit, so it overflows.
Hope that answers it!

Converting Java BigDecimal to a C#-like Decimal

In Java, the BigDecimal class stores values as A × pow(10, B), where A is a two's-complement integer of non-fixed bit length and B is a 32-bit integer.
In C#, Decimal stores values as pow(-1, s) × c × pow(10, -e), where the sign s is 0 or 1, the coefficient c is given by 0 ≤ c < pow(2, 96), and the scale e is such that 0 ≤ e ≤ 28.
I want to convert a Java BigDecimal to something like the C# Decimal, in Java.
Can you help me?
I have something like this:
class CS_likeDecimal
{
    private int hi;  // the 32 most significant bits of c
    private int mid; // the middle 32 bits
    private int lo;  // the 32 least significant bits
    .....
    public CS_likeDecimal(BigDecimal data)
    {
        ....
    }
}
In fact I found this: What's the best way to represent System.Decimal in Protocol Buffers?.
It's a protocol buffer for sending the C# decimal, but the protobuf-net project uses it to send messages between C# endpoints (and I want to go between C# and Java):
message Decimal {
    optional uint64 lo = 1;        // the first 64 bits of the underlying value
    optional uint32 hi = 2;        // the last 32 bits of the underlying value
    optional sint32 signScale = 3; // the number of decimal digits, and the sign
}
Thanks,
The Decimal I use in protobuf-net is primarily intended to support the likely usage of protobuf-net at both ends of the pipe, which supports a fixed range. It sounds like the ranges of the two types under discussion are not the same, so they are not robustly compatible.
I would suggest explicitly using an alternative representation. I don't know what representations are available to Java's BigDecimal - whether there is a pragmatic byte[] version, or a string version.
If you are confident that the scale and range won't be a problem, then it should be possible to fudge between the two layouts with some bit-fiddling.
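If you go the string route, the round-trip is simple; here is a sketch of the C# receiving side (my assumption: the Java side sends BigDecimal.toPlainString(), which avoids scientific notation):
using System;
using System.Globalization;

// Parse a plain decimal string produced by Java's BigDecimal.toPlainString().
// InvariantCulture pins '.' as the decimal separator regardless of locale.
decimal value = decimal.Parse("12345.6789", NumberStyles.Number, CultureInfo.InvariantCulture);
Console.WriteLine(value); // 12345.6789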
I needed to write a BigDecimal to/from .Net Decimal converter.
Using this reference:
http://msdn.microsoft.com/en-us/library/system.decimal.getbits.aspx
I wrote this code, which should work:
public static byte[] BigDecimalToNetDecimal(BigDecimal paramBigDecimal) throws IllegalArgumentException
{
    // .Net Decimal target
    byte[] result = new byte[16];

    // Unscaled absolute value
    BigInteger unscaledInt = paramBigDecimal.abs().unscaledValue();
    int bitLength = unscaledInt.bitLength();
    if (bitLength > 96)
        throw new IllegalArgumentException("BigDecimal too big for .Net Decimal");

    // Byte array
    byte[] unscaledBytes = unscaledInt.toByteArray();
    int unscaledFirst = 0;
    if (unscaledBytes[0] == 0)
        unscaledFirst = 1;

    // Scale
    int scale = paramBigDecimal.scale();
    if (scale > 28)
        throw new IllegalArgumentException("BigDecimal scale exceeds .Net Decimal limit of 28");
    result[1] = (byte)scale;

    // Copy the unscaled value into bytes 4-15 (big-endian)
    for (int pSource = unscaledBytes.length - 1, pTarget = 15; (pSource >= unscaledFirst) && (pTarget >= 4); pSource--, pTarget--)
    {
        result[pTarget] = unscaledBytes[pSource];
    }

    // Signum at byte 0
    if (paramBigDecimal.signum() < 0)
        result[0] = -128;

    return result;
}

public static BigDecimal NetDecimalToBigDecimal(byte[] paramNetDecimal)
{
    int scale = paramNetDecimal[1];
    int signum = paramNetDecimal[0] >= 0 ? 1 : -1;
    byte[] magnitude = new byte[12];
    for (int ptr = 0; ptr < 12; ptr++)
        magnitude[ptr] = paramNetDecimal[ptr + 4];
    BigInteger unscaledInt = new BigInteger(signum, magnitude);
    return new BigDecimal(unscaledInt, scale);
}

Getting upper and lower byte of an integer in C# and putting it as a char array to send to a com port, how?

In C I would do this
int number = 3510;
char upper = number >> 8;
char lower = number && 8;
SendByte(upper);
SendByte(lower);
Where upper and lower would both = 54
In C# I am doing this:
int number = Convert.ToInt16("3510");
byte upper = (byte)(number >> 8);
byte lower = (byte)(number & 8);
char upperc = Convert.ToChar(upper);
char lowerc = Convert.ToChar(lower);
data = "GETDM" + upperc + lowerc;
comport.Write(data);
However, in the debugger number = 3510, upper = 13 and lower = 0.
This makes no sense; if I change the code to >> 6, upper = 54, which is absolutely strange.
Basically I just want to get the upper and lower byte from the 16-bit number and send them out the com port after "GETDM".
How can I do this? It is so simple in C, but in C# I am completely stumped.
Your masking is incorrect - you should be masking against 255 (0xff) instead of 8. Shifting works in terms of "number of bits to shift by", whereas bitwise AND works against a mask value... so if you want to keep only the bottom 8 bits, you need a mask with just the bottom 8 bits set - i.e. 255.
Note that if you're trying to split a number into two bytes, it should really be a short or ushort to start with, not an int (which has four bytes).
ushort number = Convert.ToUInt16("3510");
byte upper = (byte) (number >> 8);
byte lower = (byte) (number & 0xff);
Note that I've used ushort here instead of byte as bitwise arithmetic is easier to think about when you don't need to worry about sign extension. It wouldn't actually matter in this case due to the way the narrowing conversion to byte works, but it's the kind of thing you should be thinking about.
You probably want to and it with 0x00FF
byte lower = Convert.ToByte(number & 0x00FF);
Full example:
ushort number = Convert.ToUInt16("3510");
byte upper = Convert.ToByte(number >> 8);
byte lower = Convert.ToByte(number & 0x00FF);
char upperc = Convert.ToChar(upper);
char lowerc = Convert.ToChar(lower);
data = "GETDM" + upperc + lowerc;
Even though the accepted answer fits the question, I consider it incomplete, for the simple fact that the question says int in its header (not short) and so shows up in search results for Int32 - and as we know, Int32 in C# has 32 bits and thus 4 bytes. I will post an example here that is useful when an Int32 is actually involved. In the case of an Int32 we have:
LowWordLowByte
LowWordHighByte
HighWordLowByte
HighWordHighByte.
As such, I have created the following method for converting an Int32 value into a little-endian hex string in which every byte is separated from the others by a whitespace. This is useful when you transmit data and want the receiver to process it faster: it can just Split(" ") and get the bytes represented as standalone hex strings.
public static String IntToLittleEndianWhitespacedHexString(int pValue, uint pSize)
{
    String result = String.Empty;
    pSize = pSize < 4 ? pSize : 4;
    byte tmpByte = 0x00;
    for (int i = 0; i < pSize; i++)
    {
        tmpByte = (byte)((pValue >> i * 8) & 0xFF);
        result += tmpByte.ToString("X2") + " ";
    }
    return result.TrimEnd(' ');
}
Usage:
String value1 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x927C, 4);
String value2 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x3FFFF, 4);
String value3 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x927C, 2);
String value4 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x3FFFF, 1);
The result is:
7C 92 00 00
FF FF 03 00
7C 92
FF.
If the method I created is hard to understand, then the following might be more comprehensible:
public static String IntToLittleEndianWhitespacedHexString(int pValue)
{
    String result = String.Empty;

    byte lowWordLowByte = (byte)(pValue & 0xFF);
    byte lowWordHighByte = (byte)((pValue >> 8) & 0xFF);
    byte highWordLowByte = (byte)((pValue >> 16) & 0xFF);
    byte highWordHighByte = (byte)((pValue >> 24) & 0xFF);

    result = lowWordLowByte.ToString("X2") + " " +
             lowWordHighByte.ToString("X2") + " " +
             highWordLowByte.ToString("X2") + " " +
             highWordHighByte.ToString("X2");

    return result;
}
Remarks:
Of course, instead of uint pSize there could be an enum specifying Byte, Word, DoubleWord.
Instead of converting to a hex string and building the little-endian string, you can convert to chars and do whatever you want with them.
Hope this will help someone!
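For what it's worth, the framework can shorten this considerably; here is an alternative sketch using BitConverter (which emits the bytes in host order, little-endian on typical .NET platforms):
using System;
using System.Linq;

int value = 0x927C;
string hex = string.Join(" ", BitConverter.GetBytes(value).Select(b => b.ToString("X2")));
Console.WriteLine(hex); // 7C 92 00 00 on a little-endian host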
Shouldn't it be:
byte lower = (byte) ( number & 0xFF );
To be a little more creative
[System.Runtime.InteropServices.StructLayout(System.Runtime.InteropServices.LayoutKind.Explicit)]
public struct IntToBytes
{
    [System.Runtime.InteropServices.FieldOffset(0)]
    public int Int32;
    [System.Runtime.InteropServices.FieldOffset(0)]
    public byte First;
    [System.Runtime.InteropServices.FieldOffset(1)]
    public byte Second;
    [System.Runtime.InteropServices.FieldOffset(2)]
    public byte Third;
    [System.Runtime.InteropServices.FieldOffset(3)]
    public byte Fourth;
}
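A possible usage sketch (my addition): with explicit layout all five fields share the same four bytes, so on a little-endian host First is the least significant byte.
using System;

var converter = new IntToBytes { Int32 = 3510 }; // 3510 == 0x0DB6
Console.WriteLine(converter.First);              // 182 (0xB6, the low byte)
Console.WriteLine(converter.Second);             // 13  (0x0D, the high byte)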

Improving method to read signed 8-bit integers from hexadecimal

Scenario:
I have a string of hexadecimal characters which encode 8-bit signed integers. Each two characters represent a byte which employ the leftmost (MSB) bit as the sign (rather than two's complement). I am converting these to signed ints within a loop and wondered if there's a better way to do it. There are too many conversions and I am sure there's a more efficient method that I am missing.
Current Code:
string strData = "FFC000407F"; // example input data, encodes: -127, -64, 0, 64, 127
int v;
for (int x = 0; x < strData.Length / 2; x++)
{
    v = HexToInt(strData.Substring(x * 2, 2));
    Console.WriteLine(v); // do stuff with v
}

private int HexToInt(string _hexData)
{
    string strBinary = Convert.ToString(Convert.ToInt32(_hexData, 16), 2).PadLeft(_hexData.Length * 4, '0');
    int i = Convert.ToInt32(strBinary.Substring(1, 7), 2);
    i = (strBinary.Substring(0, 1) == "0" ? i : -i);
    return i;
}
Question:
Is there a more streamlined and direct approach to reading two hex characters and converting them to an int when they represent a signed int (-127 to 127) using the leftmost bit as the sign?
Just convert it to an int and handle the sign bit by testing the size of the converted number and masking off the sign bit.
private int HexToInt(string _hexData)
{
    int number = Convert.ToInt32(_hexData, 16);
    if (number >= 0x80)
        return -(number & 0x7F);
    return number;
}
Like this: (Tested)
(int)unchecked((sbyte)Convert.ToByte("FF", 16))
Explanation:
The unchecked cast to sbyte reinterprets the bits directly as a signed byte, treating the most significant bit as a two's-complement sign bit.
However, two's complement is a different representation (and range) than the sign-and-magnitude encoding in your data, so that cast alone won't give you the values you want. This will:
sbyte SignAndMagnitudeToTwosComplement(byte b)
{
    var isNegative = ((b & 0x80) >> 7);
    return (sbyte)((b ^ 0x7F * isNegative) + isNegative);
}
Then:
sbyte ReadSignAndMagnitudeByte(string hex)
{
    return SignAndMagnitudeToTwosComplement(Convert.ToByte(hex, 16));
}
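A quick test harness over the question's example input (my own sketch, reusing ReadSignAndMagnitudeByte from above):
using System;

string strData = "FFC000407F";
for (int x = 0; x < strData.Length / 2; x++)
{
    sbyte v = ReadSignAndMagnitudeByte(strData.Substring(x * 2, 2));
    Console.WriteLine(v); // -127, -64, 0, 64, 127
}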
