Binary, Int32, and Hex conversions inquiry - C#

The following code is a work in progress that I'm using to learn more about converting between bits, hex, and Int32 values.
A lot of this is obviously repetitive, since we're doing the same thing to seven different "packages," so feel free to gloss over the repeats (I just wanted the entire code structure up front, to maybe answer some questions ahead of time).
/* Pack bits into containers to send them as 32-bit (4 bytes) items */
int finalBitPackage_1 = 0;
int finalBitPackage_2 = 0;
int finalBitPackage_3 = 0;
int finalBitPackage_4 = 0;
int finalBitPackage_5 = 0;
int finalBitPackage_6 = 0;
int finalBitPackage_7 = 0;
var bitContainer_1 = new BitArray(32, false);
var bitContainer_2 = new BitArray(32, false);
var bitContainer_3 = new BitArray(32, false);
var bitContainer_4 = new BitArray(32, false);
var bitContainer_5 = new BitArray(32, false);
var bitContainer_6 = new BitArray(32, false);
var bitContainer_7 = new BitArray(32, false);
string hexValue = String.Empty;
...
*assign 32 bits (from bools) to every bitContainer[] here*
...
/* Using this single 1-D array for all assignments works because we store each conversion's
result as soon as we make it; this way we never overwrite ourselves */
int[] data = new int[1];
/* Copy containers to a 1-dimensional array, then into an Int for transmission */
bitContainer_1.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_1 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_2.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_2 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_3.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_3 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_4.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_4 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_5.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_5 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_6.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_6 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_7.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_7 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
From what I've learned so far, when a binary value is converted to Int32, the first digit indicates the sign: 1 means negative and 0 means positive. However, the bitArrays that start with a 0 show up as negative numbers after the CopyTo(int[]) transaction, and the bitArrays that start with a 1 show up as positive when they are copied.
In addition, there is the problem of converting the Int32 values into hex values. Any values that come out of the array conversion as negative don't get the eight F's added to the front, as they do when checked with http://www.binaryhexconverter.com/. My hex knowledge is limited, so I wasn't sure what the difference was, and I didn't want to lose meaningful data when I transmit it to another system (over TCP/IP, if that matters to anyone). I'll post the values I'm getting out of everything below to help clear it up some.
Variable        Binary                              Int32[]        My Hex
bitContainer_1  "01010101010101010101010101010101"  "-1431655766"  AAAAAAAA
bitContainer_2  "10101010101010101010101010101010"  "1431655765"   55555555
bitContainer_3  "00110011001100110011001100110011"  "-858993460"   CCCCCCCC
bitContainer_4  "11001100110011001100110011001100"  "858993459"    33333333
bitContainer_5  "11100011100011100011100011100011"  "-954437177"   C71C71C7
bitContainer_6  "00011100011100011100011100011100"  "954437176"    38E38E38
bitContainer_7  "11110000111100001111000011110000"  "252645135"    F0F0F0F
Online Hex Values:
FFFFFFFFAAAAAAAA
55555555
FFFFFFFFCCCCCCCC
33333333
FFFFFFFFC71C71C7
38E38E38
F0F0F0F

If every integer sign value is reversed, multiply by -1 (-1 * theIntegerValue) to un-reverse it. It could also have something to do with how you're calling ToString("X"); maybe try a blank format string?
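A note on what's likely happening: BitArray.CopyTo(int[]) packs index 0 of the array into the least significant bit of the int, so the first character of the binary strings above is actually the low bit and the last character is the sign bit; that's why strings starting with 0 can still come out negative. If the end goal is a full 8-digit hex string that survives the sign bit, a minimal sketch (using the same BitArray/CopyTo pipeline as the question, with illustrative names) is to reinterpret the packed int as a uint and format with "X8":
using System;
using System.Collections;

var bits = new BitArray(32, false);
bits[1] = bits[3] = true;               // example pattern; remember index 0 is the LSB

int[] data = new int[1];
bits.CopyTo(data, 0);                   // index 0 of the BitArray lands in bit 0 of the int

uint unsignedView = (uint)data[0];      // reinterprets the same 32 bits; nothing is lost
Console.WriteLine(unsignedView.ToString("X8"));   // prints "0000000A"
The online converter shows eight leading F's because it widens the negative Int32 to 64 bits first; the low 32 bits are identical either way.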

Related

How to store a byte string in byte[] using EntityFramework

I have
0x4D5A90000300000004000000FFFF0000B80000000000000040...
generated from SQL Server.
How can I insert this byte string into a byte[] column in the database using EntityFramework?
As per my comment above, I strongly suspect that the best thing to do here is to return the data as a byte[] from the server; that should be fine and easy to do. However, if you have to use a string, then you'll need to parse it out: take off the 0x prefix, divide the length by 2 to get the number of bytes, then loop and parse each 2-character substring using Convert.ToByte(s, 16) in turn. Something like (completely untested):
int len = (value.Length / 2) - 1;   // half the string length, minus one byte for the "0x" prefix
var arr = new byte[len];
for (int i = 0; i < len; i++)
{
    var s = value.Substring((i + 1) * 2, 2);  // skip "0x", then take 2 hex chars per byte
    arr[i] = Convert.ToByte(s, 16);
}
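On newer runtimes (.NET 5 and later), the same parse is a single call, assuming the string still carries the 0x prefix:
byte[] arr = Convert.FromHexString(value.Substring(2)); // strip "0x" first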

How do I create a byte array that contains 64 bits, and how do I convert those bits into a hex value?

I want to create a byte array that contains 64 bits. How can I get a particular bit's value, say the 17th bit, and also how can I get the hex value at that index of the byte array? I did it like this; is this correct?
byte[] _byte = new byte[8];
var bit17 = (_byte[2] >> 1) & 0x01;
string hex = BitConverter.ToString(_byte, 2, 4).Replace("-", string.Empty);
You could use a BitArray:
var bits = new BitArray(64);
bool bit17 = bits[17];
I'm not sure what you mean by the "hex value of that bit" - it will be 0 or 1, because it's a bit.
If you have the index of a bit in a byte (between 0 and 7 inclusive) then you can convert that to a hex string as follows:
int bitNumber = 7; // For example.
byte value = (byte)(1 << bitNumber);
string hex = value.ToString("x");
Console.WriteLine(hex);
You can just use the ToString() method.
byte[] arr = new byte[8];
int index = 0;
string hexValue = arr[index].ToString("X");
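Tying the two halves together, a small end-to-end sketch (names are only illustrative): pack 64 bits, read bit 17 back, and hex-dump the whole array:
using System;
using System.Collections;

var bits = new BitArray(64);
bits[17] = true;                        // 0-based: bit 17 lives in bit position 1 of byte 2
bool bit17 = bits[17];                  // true
byte[] bytes = new byte[8];
bits.CopyTo(bytes, 0);                  // index 0 of the BitArray lands in bit 0 of bytes[0]
string hex = BitConverter.ToString(bytes).Replace("-", string.Empty);
Console.WriteLine(hex);                 // prints "0000020000000000"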

How to reverse byte array using pointers

Working on Windows and C# (which means little endian), so no need for extra checks.
How can I increase the speed of reversing an array of bytes by using pointers, instead of a normal for loop like this one?
const int value = 133;
var bArr = new byte[] { 0, 0, 0, value };
int Len = bArr.Length;
var rrAb = new byte[Len];
for (int idx = 0; idx < bArr.Length; idx++)
    rrAb[idx] = bArr[--Len];
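A pointer version might look roughly like this (a sketch only; it assumes the project is compiled with unsafe enabled, and it reverses in place rather than into a second array):
static unsafe void ReverseInPlace(byte[] arr)
{
    fixed (byte* p = arr)               // pin the array so the GC can't move it mid-loop
    {
        byte* lo = p;
        byte* hi = p + arr.Length - 1;
        while (lo < hi)
        {
            byte tmp = *lo;             // classic two-pointer swap from both ends
            *lo++ = *hi;
            *hi-- = tmp;
        }
    }
}
That said, Array.Reverse(bArr) is usually fast enough and avoids unsafe code entirely.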

Expressing a hex value in 2's complement

I have a string hex value, and I need to express it in 2's complement.
string hx = "FF00";
What I did is convert it to binary:
string h = Convert.ToString(Convert.ToInt32(hx, 16), 2);
Then I tried inverting it, but I couldn't use the NOT operator on a string.
Is there any short way to invert the bits and then add 1 (the 2's complement operation)?
The answer might depend on whether or not the bit width of the value is important to you.
The short answer is:
string hx = "FF00";
uint intVal = Convert.ToUInt32(hx, 16);        // intVal == 65280
uint twosComp = ~intVal + 1;                   // twosComp == 4294902016
string h = string.Format("{0:X}", twosComp);   // h == "FFFF0100"
The value of h is then "FFFF0100", which is the 32-bit 2's complement of hx. If you were expecting "100", then you need to use 16-bit calculations:
string hx = "FF00";
ushort intVal = Convert.ToUInt16(hx, 16);      // intVal == 65280
ushort twosComp = (ushort)(~intVal + 1);       // twosComp == 256
string h = string.Format("{0:X}", twosComp);   // h == "100"
Bear in mind that uint is an alias for UInt32 and ushort aliases the UInt16 type. For clarity in this type of operation you'd probably be better off using the explicit names.
Two's complement is really simple:
int value = 100;
value = ~value;      // NOT
value = value + 1;
// Now value is -100
Remember that a two's complement system requires inverting and then adding 1.
In hex:
int value = 0x45;
value = ~value;      // NOT
value = value + 1;   // value is now -0x45 (-69), which is FFFFFFBB as 32-bit hex

Converting 2 bytes to Short in C#

I'm trying to convert two bytes into an unsigned short so I can retrieve the actual server port value. I'm basing it on this protocol specification, under Reply Format. I tried using BitConverter.ToUInt16() for this, but the problem is that it doesn't seem to return the expected value. See below for a sample implementation:
int bytesRead = 0;
while (bytesRead < ms.Length)
{
    int first = ms.ReadByte() & 0xFF;
    int second = ms.ReadByte() & 0xFF;
    int third = ms.ReadByte() & 0xFF;
    int fourth = ms.ReadByte() & 0xFF;
    int port1 = ms.ReadByte();
    int port2 = ms.ReadByte();
    int actualPort = BitConverter.ToUInt16(new byte[2] { (byte)port1, (byte)port2 }, 0);
    string ip = String.Format("{0}.{1}.{2}.{3}:{4}-{5} = {6}", first, second, third, fourth, port1, port2, actualPort);
    Debug.WriteLine(ip);
    bytesRead += 6;
}
Given one sample of data, say the two byte values 105 and 135, the expected port value after conversion should be 27015, but instead I get 34665 from BitConverter.
Am I doing it the wrong way?
If you reverse the values in the BitConverter call, you should get the expected result:
int actualPort = BitConverter.ToUInt16(new byte[2] { (byte)port2, (byte)port1 }, 0);
On a little-endian architecture, the low-order byte needs to be first in the array. And as lasseespeholt points out in the comments, you would need to reverse the order on a big-endian architecture. That can be checked with the BitConverter.IsLittleEndian property. Or it might be a better solution overall to use IPAddress.HostToNetworkOrder: convert the value first, then call that method to put the bytes in the correct order regardless of the endianness.
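As a rough sketch of that last suggestion (the receive-side counterpart is IPAddress.NetworkToHostOrder; port1/port2 are assumed to arrive in network order, high byte first):
using System;
using System.Net;

byte port1 = 105, port2 = 135;          // as read from the stream
ushort raw = BitConverter.ToUInt16(new byte[] { port1, port2 }, 0);
int actualPort = (ushort)IPAddress.NetworkToHostOrder((short)raw);
Console.WriteLine(actualPort);          // 27015 on both little- and big-endian hosts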
BitConverter is doing the right thing, you just have low-byte and high-byte mixed up - you can verify using a bitshift manually:
byte port1 = 105;
byte port2 = 135;
ushort value = BitConverter.ToUInt16(new byte[2] { port1, port2 }, 0);
ushort value2 = (ushort)(port1 + (port2 << 8));   // same output
To work on both little- and big-endian architectures, you must do something like:
if (BitConverter.IsLittleEndian)
    actualPort = BitConverter.ToUInt16(new byte[2] { (byte)port2, (byte)port1 }, 0);
else
    actualPort = BitConverter.ToUInt16(new byte[2] { (byte)port1, (byte)port2 }, 0);
