Converting 2 bytes to Short in C#

I'm trying to convert two bytes into an unsigned short so I can retrieve the actual server port value. I'm basing it on this protocol specification, under Reply Format. I tried using BitConverter.ToUInt16() for this, but the problem is that it doesn't seem to return the expected value. See below for a sample implementation:
int bytesRead = 0;
while (bytesRead < ms.Length)
{
    int first = ms.ReadByte() & 0xFF;
    int second = ms.ReadByte() & 0xFF;
    int third = ms.ReadByte() & 0xFF;
    int fourth = ms.ReadByte() & 0xFF;
    int port1 = ms.ReadByte();
    int port2 = ms.ReadByte();
    int actualPort = BitConverter.ToUInt16(new byte[2] { (byte)port1, (byte)port2 }, 0);
    string ip = String.Format("{0}.{1}.{2}.{3}:{4}-{5} = {6}", first, second, third, fourth, port1, port2, actualPort);
    Debug.WriteLine(ip);
    bytesRead += 6;
}
Given some sample data where the two port bytes are 105 and 135, the expected port value after conversion is 27015, but instead I get 34665 from BitConverter.
Am I doing it the wrong way?

If you reverse the values in the BitConverter call, you should get the expected result:
int actualPort = BitConverter.ToUInt16(new byte[2] {(byte)port2 , (byte)port1 }, 0);
On a little-endian architecture, the low-order byte needs to be first in the array. And as lasseespeholt points out in the comments, you would need to reverse the order on a big-endian architecture. That can be checked with the BitConverter.IsLittleEndian property. Or it might be a better solution overall to use IPAddress.HostToNetworkOrder or its mirror NetworkToHostOrder (read the value first, then call the method to put the bytes in the correct order regardless of the endianness).
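If you go the IPAddress route, here is a minimal sketch (the helper name is just for illustration; it assumes the two port bytes arrive in network byte order as in the protocol, and it needs using System.Net; NetworkToHostOrder and HostToNetworkOrder perform the same swap, and both are no-ops on big-endian machines):
using System;
using System.Net;

static int PortFromNetworkOrderBytes(byte first, byte second)
{
    // Read the pair as-is; on a little-endian machine this yields the byte-swapped value.
    ushort raw = BitConverter.ToUInt16(new byte[] { first, second }, 0);
    // NetworkToHostOrder swaps the bytes only when the machine is little-endian.
    return (ushort)IPAddress.NetworkToHostOrder((short)raw);
}
For the sample data above, PortFromNetworkOrderBytes(105, 135) returns 27015.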

BitConverter is doing the right thing; you just have the low byte and high byte mixed up. You can verify that with a manual bit shift:
byte port1 = 105;
byte port2 = 135;
ushort value = BitConverter.ToUInt16(new byte[2] { port1, port2 }, 0);
ushort value2 = (ushort)(port1 + (port2 << 8)); // same output (34665), because port1 ends up in the low byte

To work on both little- and big-endian architectures, you must do something like:
if (BitConverter.IsLittleEndian)
    actualPort = BitConverter.ToUInt16(new byte[2] { (byte)port2, (byte)port1 }, 0);
else
    actualPort = BitConverter.ToUInt16(new byte[2] { (byte)port1, (byte)port2 }, 0);
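Alternatively, a tiny endian-independent sketch that avoids BitConverter altogether (assuming, as in the protocol, that port1 holds the high byte and port2 the low byte):
int actualPort = (port1 << 8) | port2; // high byte first regardless of machine endianness; 105 and 135 give 27015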

Related

Unity C# Convert float to byte array and read with node js

Can someone explain how I can convert a float (Vector3.x) to a byte array with C# and decode it with Node.js?
I read that Vector3.x is a System.Single, which uses 4 bytes (32 bits). I use BitConverter to convert it to a byte array. On the Node.js side I use readFloatBE().
I don't know what I'm doing wrong, but I constantly get a wrong result in Node.js with console.log().
Unity C#:
public static int FloatToBit(int offset, ref byte[] data, Single number)
{
    byte[] byteArray = System.BitConverter.GetBytes(number);
    for (int i = 0; i < 4; i++)
    {
        data[offset + i] = byteArray[i];
    }
    return 4;
}
Node.js:
readFloat: function (offset, data) {
    var b = new Buffer(4);
    for (var i = 0; i < 4; i++) {
        b[i] = data[offset + i];
    }
    return data.readFloatLE(b, 0);
},
If I send -2.5, the Unity output is: 0 0 32 192; with -1, the Unity output is: 0 0 128 191.
Nodejs output with readFloatLE: 3.60133705331478e-43
Here's a working set of data from front to back.
C#:
Single fl = 2.5F;
var bytes = System.BitConverter.GetBytes(fl);
var str = BitConverter.ToString(bytes); // 00-00-20-40
Node.js:
let buffer = Buffer.from([ 0x00, 0x00, 0x20, 0x40 ]);
let float = buffer.readFloatLE(); // 2.5
Note in particular the method used to create the buffer in Node.js (also tested and verified with -1, but that code is left out for brevity).
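If you would rather keep readFloatBE() on the Node side, a minimal C# sketch that emits the bytes in big-endian order instead (assuming the usual little-endian machine, hence the IsLittleEndian check):
byte[] bytes = System.BitConverter.GetBytes(2.5f);
if (System.BitConverter.IsLittleEndian)
    System.Array.Reverse(bytes);
var str = System.BitConverter.ToString(bytes); // 40-20-00-00, ready for readFloatBE()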
Thanks for the replies.
I posted the same question over here: Unity Questions
Somebody gave me the answer: use
readFloat: function (offset, data) {
    return data.readFloatLE(offset);
},
instead of creating a new buffer. The data parameter is already a buffer. This works for me, although I still don't understand why my original example wasn't working.

Binary, Int32, and Hex conversions inquiry

The following code is a work in progress that I'm also using to learn more about converting between bits, hex, and Int32.
A lot of this is obviously repetitive, since we're doing the same thing to seven different "packages," so feel free to gloss over the repeats (I just wanted to include the entire code structure, to maybe answer some questions ahead of time).
/* Pack bits into containers to send them as 32-bit (4 bytes) items */
int finalBitPackage_1 = 0;
int finalBitPackage_2 = 0;
int finalBitPackage_3 = 0;
int finalBitPackage_4 = 0;
int finalBitPackage_5 = 0;
int finalBitPackage_6 = 0;
int finalBitPackage_7 = 0;
var bitContainer_1 = new BitArray(32, false);
var bitContainer_2 = new BitArray(32, false);
var bitContainer_3 = new BitArray(32, false);
var bitContainer_4 = new BitArray(32, false);
var bitContainer_5 = new BitArray(32, false);
var bitContainer_6 = new BitArray(32, false);
var bitContainer_7 = new BitArray(32, false);
string hexValue = String.Empty;
...
*assign 32 bits (from bools) to every bitContainer[] here*
...
/* Using this single 1-D array for all assignments works because as soon as we convert arrays,
we store the result; this way we never overwrite ourselves */
int[] data = new int[1];
/* Copy containers to a 1-dimensional array, then into an Int for transmission */
bitContainer_1.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_1 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_2.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_2 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_3.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_3 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_4.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_4 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_5.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_5 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_6.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_6 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_7.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_7 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
From what I've learned so far, when a binary value is converted to Int32 the first digit tells whether it will be negative or positive, where 1 indicates negative and 0 indicates positive; however, the bitArrays that start with a 0 show up as negative numbers when I do the CopyTo(int[]) operation, and the bitArrays that start with a 1 show up as positive when they are copied.
In addition, there is the problem of converting them from their Int32 values into hex values. Any values that come out of the array conversion as negative don't get the eight F's added to the front, as they do when checked on http://www.binaryhexconverter.com/. I wasn't sure what that difference means, since my hex knowledge is limited, and I don't want to lose meaningful data when I transmit it to another system (over TCP/IP, if that matters to anyone). I'll post the values I'm getting out of everything below to help clear it up.
Variable        Binary                            Int32        My Hex
bitContainer_1  01010101010101010101010101010101  -1431655766  AAAAAAAA
bitContainer_2  10101010101010101010101010101010  1431655765   55555555
bitContainer_3  00110011001100110011001100110011  -858993460   CCCCCCCC
bitContainer_4  11001100110011001100110011001100  858993459    33333333
bitContainer_5  11100011100011100011100011100011  -954437177   C71C71C7
bitContainer_6  00011100011100011100011100011100  954437176    38E38E38
bitContainer_7  11110000111100001111000011110000  252645135    F0F0F0F
Online Hex Values:
FFFFFFFFAAAAAAAA
55555555
FFFFFFFFCCCCCCCC
33333333
FFFFFFFFC71C71C7
38E38E38
F0F0F0F
If every integer's sign is reversed, multiply by -1 (theIntegerValue * -1) to un-reverse it. It could also have something to do with how you're calling ToString("X"); maybe try a blank format string?
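A likely cause of the flipped signs is BitArray's bit ordering rather than ToString("X"): BitArray.CopyTo(int[], 0) packs index 0 into the least significant bit of the int, so the first character of your binary string ends up at the bottom of the value and the last character lands on the sign bit. A minimal sketch illustrating this:
using System.Collections;

// "1000...0" written left to right: index 0 is true, the other 31 bits are false.
var bits = new BitArray(32, false);
bits[0] = true;

int[] data = new int[1];
bits.CopyTo(data, 0);
// data[0] == 1, not int.MinValue: index 0 maps to the least significant bit.
The difference from the online converter is just sign extension: the website shows a 64-bit representation, so a negative Int32 gains eight leading F's. For example, (-1431655766).ToString("X") gives AAAAAAAA, while ((long)-1431655766).ToString("X") gives FFFFFFFFAAAAAAAA; no data is lost either way as long as the receiver knows it is reading a 32-bit value.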

Getting upper and lower byte of an integer in C# and putting it as a char array to send to a com port, how?

In C I would do this
int number = 3510;
char upper = number >> 8;
char lower = number && 8;
SendByte(upper);
SendByte(lower);
Where upper and lower would both = 54
In C# I am doing this:
int number = Convert.ToInt16("3510");
byte upper = byte(number >> 8);
byte lower = byte(number & 8);
char upperc = Convert.ToChar(upper);
char lowerc = Convert.ToChar(lower);
data = "GETDM" + upperc + lowerc;
comport.Write(data);
However, in the debugger number = 3510, upper = 13 and lower = 0.
This makes no sense; if I change the code to >> 6, upper = 54, which is absolutely strange.
Basically I just want to get the upper and lower bytes from the 16-bit number and send them out the COM port after "GETDM".
How can I do this? It is so simple in C, but in C# I am completely stumped.
Your masking is incorrect - you should be masking against 255 (0xff) instead of 8. Shifting works in terms of "bits to shift by" whereas bitwise and/or work against the value to mask against... so if you want to only keep the bottom 8 bits, you need a mask which just has the bottom 8 bits set - i.e. 255.
Note that if you're trying to split a number into two bytes, it should really be a short or ushort to start with, not an int (which has four bytes).
ushort number = Convert.ToUInt16("3510");
byte upper = (byte) (number >> 8);
byte lower = (byte) (number & 0xff);
Note that I've used ushort here instead of byte as bitwise arithmetic is easier to think about when you don't need to worry about sign extension. It wouldn't actually matter in this case due to the way the narrowing conversion to byte works, but it's the kind of thing you should be thinking about.
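For the concrete value in the question, a quick sanity check (worked out by hand, so treat it as illustrative):
ushort number = 3510;               // 0x0DB6
byte upper = (byte)(number >> 8);   // 13  (0x0D)
byte lower = (byte)(number & 0xff); // 182 (0xB6); the original mask of 8 kept only bit 3, which happens to be 0 here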
You probably want to AND it with 0x00FF:
byte lower = Convert.ToByte(number & 0x00FF);
Full example:
ushort number = Convert.ToUInt16("3510");
byte upper = Convert.ToByte(number >> 8);
byte lower = Convert.ToByte(number & 0x00FF);
char upperc = Convert.ToChar(upper);
char lowerc = Convert.ToChar(lower);
data = "GETDM" + upperc + lowerc;
Even though the accepted answer fits the question, I consider it incomplete, simply because the question says int (not short) in its title and is therefore misleading in search results; as we know, Int32 in C# has 32 bits and thus 4 bytes. I will post an example here that is useful when working with an Int32. In the case of an Int32 we have:
LowWordLowByte
LowWordHighByte
HighWordLowByte
HighWordHighByte
As such, I have created the following method for converting an Int32 value into a little-endian hex string in which every byte is separated from the others by whitespace. This is useful when you transmit data and want the receiver to process it faster: it can just Split(" ") and get the bytes as standalone hex strings.
public static String IntToLittleEndianWhitespacedHexString(int pValue, uint pSize)
{
    String result = String.Empty;
    pSize = pSize < 4 ? pSize : 4;
    byte tmpByte = 0x00;
    for (int i = 0; i < pSize; i++)
    {
        tmpByte = (byte)((pValue >> i * 8) & 0xFF);
        result += tmpByte.ToString("X2") + " ";
    }
    return result.TrimEnd(' ');
}
Usage:
String value1 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x927C, 4);
String value2 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x3FFFF, 4);
String value3 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x927C, 2);
String value4 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x3FFFF, 1);
The result is:
7C 92 00 00
FF FF 03 00
7C 92
FF.
If it is hard to understand the method which I created, then the following might be a more comprehensible one:
public static String IntToLittleEndianWhitespacedHexString(int pValue)
{
    String result = String.Empty;
    byte lowWordLowByte = (byte)(pValue & 0xFF);
    byte lowWordHighByte = (byte)((pValue >> 8) & 0xFF);
    byte highWordLowByte = (byte)((pValue >> 16) & 0xFF);
    byte highWordHighByte = (byte)((pValue >> 24) & 0xFF);
    result = lowWordLowByte.ToString("X2") + " " +
             lowWordHighByte.ToString("X2") + " " +
             highWordLowByte.ToString("X2") + " " +
             highWordHighByte.ToString("X2");
    return result;
}
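A quick usage sketch for this variant (value chosen to match the first example above):
String value = IntToLittleEndianWhitespacedHexString(0x927C); // "7C 92 00 00"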
Remarks:
Of course, instead of uint pSize there could be an enum specifying Byte, Word, or DoubleWord.
Instead of converting to a hex string and building the little-endian string, you can convert to chars and do whatever you want with them.
Hope this will help someone!
Shouldn't it be:
byte lower = (byte)(number & 0xFF);
To be a little more creative
[System.Runtime.InteropServices.StructLayout(System.Runtime.InteropServices.LayoutKind.Explicit)]
public struct IntToBytes {
    [System.Runtime.InteropServices.FieldOffset(0)]
    public int Int32;
    [System.Runtime.InteropServices.FieldOffset(0)]
    public byte First;
    [System.Runtime.InteropServices.FieldOffset(1)]
    public byte Second;
    [System.Runtime.InteropServices.FieldOffset(2)]
    public byte Third;
    [System.Runtime.InteropServices.FieldOffset(3)]
    public byte Fourth;
}
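A minimal usage sketch for this struct (note that the field-to-byte mapping follows the machine's own byte order; the values shown assume the usual little-endian case):
var converter = new IntToBytes { Int32 = 3510 }; // 0x00000DB6
byte low = converter.First;    // 0xB6 on a little-endian machine
byte high = converter.Second;  // 0x0D on a little-endian machine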

How can I convert an int to byte array without the 0x00 bytes?

I'm trying to convert an int value to a byte array, but I'm using the bytes for MIDI information (meaning that the 0x00 byte returned by GetBytes acts as a separator), which renders my MIDI information useless.
I would like to convert the int to an array that leaves out the 0x00 bytes and only contains the bytes with actual values. How can I do this?
You've completely misunderstood what you need, but luckily you mentioned MIDI. You need to use the multi-byte encoding that MIDI defines, which is somewhat similar to UTF-8 in that fewer than 8 bits of data are placed into each octet, with the remaining bit indicating whether more octets follow.
See the description on Wikipedia. Pay close attention to the fact that protobuf uses this encoding; you can probably reuse some of Google's code.
Based on the info Ben added, this should do what you require:
static byte[] VlqEncode(int value)
{
    uint uvalue = (uint)value;
    if (uvalue < 128) return new byte[] { (byte)uvalue }; // simplest case: fits in a single octet
    // calculate length of buffer required
    int len = 0;
    do {
        len++;
        uvalue >>= 7;
    } while (uvalue != 0);
    // encode (this is untested, following the VLQ/MIDI/protobuf confusion)
    uvalue = (uint)value;
    byte[] buffer = new byte[len];
    for (int offset = len - 1; offset >= 0; offset--)
    {
        buffer[offset] = (byte)(128 | (uvalue & 127)); // only the last 7 bits
        uvalue >>= 7;
    }
    buffer[len - 1] &= 127; // clear the continuation bit on the final octet
    return buffer;
}
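For completeness, a matching decode sketch (not part of the answer above; it assumes the same MIDI-style layout the encoder produces, with the continuation bit set on every octet except the last):
static int VlqDecode(byte[] buffer, ref int offset)
{
    uint result = 0;
    byte b;
    do
    {
        b = buffer[offset++];
        result = (result << 7) | (uint)(b & 127); // accumulate 7 payload bits per octet
    } while ((b & 128) != 0);                     // continuation bit set means another octet follows
    return (int)result;
}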

How can I assign an integer to 3Bytes of field?

I have retrieved the size of my struct using Marshal.SizeOf, like below:
int len = Marshal.SizeOf(packet);
Now len has a value of 40. I have to assign this 40 to a 3-byte field of my structure. My structure looks like this:
public struct TCP_CIFS_Packet
{
    public byte zerobyte;
    public byte[] lengthCIFSPacket;
    public CIFSPacket cifsPacket;
}
I tried assigning the values like this:
tcpCIFSPacket.lengthCIFSPacket = new byte[3];
tcpCIFSPacket.lengthCIFSPacket[0] = Convert.ToByte(0);
tcpCIFSPacket.lengthCIFSPacket[1] = Convert.ToByte(0);
tcpCIFSPacket.lengthCIFSPacket[2] = Convert.ToByte(40);
But this doesn't seem to be the right way. Is there any other way I can do this?
Edit (for #ho1 and #Rune Grimstad):
After using BitConverter.GetBytes like the following:
tcpCIFSPacket.lengthCIFSPacket = BitConverter.GetBytes(lengthofPacket);
the size of lengthCIFSPacket changes to 4 bytes, but I only have 3 bytes of space for tcpCIFSPacket.lengthCIFSPacket in the packet structure.
int number = 500000;
byte[] bytes = new byte[3];
bytes[0] = (byte)((number & 0xFF) >> 0);
bytes[1] = (byte)((number & 0xFF00) >> 8);
bytes[2] = (byte)((number & 0xFF0000) >> 16);
or
byte[] bytes = BitConverter.GetBytes(number); // this will return 4 bytes of course
Edit: you can also do this:
byte[] bytes = BitConverter.GetBytes(number);
tcpCIFSPacket.lengthCIFSPacket = new byte[3];
tcpCIFSPacket.lengthCIFSPacket[0] = bytes[0];
tcpCIFSPacket.lengthCIFSPacket[1] = bytes[1];
tcpCIFSPacket.lengthCIFSPacket[2] = bytes[2];
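The same copy can be written a little more compactly; a minimal sketch (assuming, as above, that only the three low-order bytes matter and that BitConverter's byte order matches what the receiver expects):
byte[] bytes = BitConverter.GetBytes(lengthofPacket);
tcpCIFSPacket.lengthCIFSPacket = new byte[3];
Array.Copy(bytes, 0, tcpCIFSPacket.lengthCIFSPacket, 0, 3); // drop the fourth byte (the high-order byte on a little-endian machine)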
Look at BitConverter.GetBytes. It'll convert the int to an array of bytes. See here for more info.
You can use the BitConverter class to convert an Int32 to an array of bytes using the GetBytes method.
