Byte conversion to INT64, under the hood - c#

Good day. For a current project I need to know how data types are represented as bytes. For example, if I use:
long three = 500;
var bytes = BitConverter.GetBytes(three);
I get the values 244,1,0,0,0,0,0,0. I get that it is a 64-bit value, and 8 bits go into a byte, so there are 8 bytes. But how do 244 and 1 make up 500? I tried Googling it, but all I get is "use BitConverter". I need to know how BitConverter works under the hood. If anybody can point me to an article or explain how this stuff works, it would be appreciated.

It's quite simple.
BitConverter.GetBytes((long)1); // {1,0,0,0,0,0,0,0};
BitConverter.GetBytes((long)10); // {10,0,0,0,0,0,0,0};
BitConverter.GetBytes((long)100); // {100,0,0,0,0,0,0,0};
BitConverter.GetBytes((long)255); // {255,0,0,0,0,0,0,0};
BitConverter.GetBytes((long)256); // {0,1,0,0,0,0,0,0}; this 1 is 256
BitConverter.GetBytes((long)500); // {244,1,0,0,0,0,0,0}; this is your 500: 244 + 1 * 256
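In other words, on a little-endian machine GetBytes writes the least significant byte first, so the byte at index i is worth 256^i. A minimal sketch of my own (not part of BitConverter) that rebuilds the value from those bytes:
using System;

class Demo
{
    static void Main()
    {
        byte[] bytes = BitConverter.GetBytes(500L); // {244, 1, 0, 0, 0, 0, 0, 0} on a little-endian machine

        long value = 0;
        for (int i = 0; i < bytes.Length; i++)
        {
            value += (long)bytes[i] << (8 * i);     // same as bytes[i] * 256^i
        }

        Console.WriteLine(value);                   // 500 = 244 + 1 * 256
    }
}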
If you need the source code, you should check Microsoft's GitHub, since the implementation is open source :)
https://github.com/dotnet

From the source code:
// Converts a long into an array of bytes with length
// eight.
[System.Security.SecuritySafeCritical] // auto-generated
public unsafe static byte[] GetBytes(long value)
{
    Contract.Ensures(Contract.Result<byte[]>() != null);
    Contract.Ensures(Contract.Result<byte[]>().Length == 8);
    byte[] bytes = new byte[8];
    fixed (byte* b = bytes)
        *((long*)b) = value;
    return bytes;
}
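The pointer cast simply reinterprets the 8 bytes of the long in the machine's native byte order. Not the real implementation, but a shift-based sketch of my own that produces the same layout on a little-endian machine without unsafe code:
static byte[] GetBytesManaged(long value)
{
    byte[] bytes = new byte[8];
    for (int i = 0; i < 8; i++)
    {
        bytes[i] = (byte)(value >> (8 * i)); // least significant byte lands at index 0
    }
    return bytes;
}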

Related

C# BitConverter.GetBytes() padding is incorrect?

I am working on writing my own DNS server in .NET Core. I'm at the stage where I am encoding the response payload to send back, and the schema shows that most of the numbers are encoded as 16-bit numbers. C#'s ints are 32-bit numbers. Not a big deal; I'm just dropping the upper 16 bits off the front of the number, and I have no problem with that.
I was doing this by hand until I discovered the System.BitConverter class. I tried using it, however, and the results I came up with by hand were the reverse of what it produced.
For example:
using System;
var myInt = 15;
byte[] data = new byte[2];
data[0] = (byte)(myInt >> 8);
data[1] = (byte)(myInt & 255);
var myIntStr = "";
foreach (var b in data)
{
    myIntStr += System.Convert.ToHexString(new byte[] { b });
    myIntStr += " ";
}
Console.WriteLine(myIntStr);
var myShort = System.Convert.ToInt16(myInt);
byte[] data2 = System.BitConverter.GetBytes(myShort);
myIntStr = "";
foreach (var b in data2)
{
    myIntStr += System.Convert.ToHexString(new byte[] { b });
    myIntStr += " ";
}
Console.WriteLine(myIntStr);
This code produces the following result:
00 0F
0F 00
It's my understanding that 000F is 15 whereas 0F00 is 3840. Am I not understanding bit shifting correctly? I literally just started working with actual bits last night lol.
Thanks for reading this and thanks in advance for your help!
As per the comments on the question, the answer lies in endianness.
Network byte order, as sent by the dig command I am using to test with, is big-endian. However, my CPU architecture is little-endian.
Behind the scenes, .NET's UdpClient class reverses the bytes when sending if your system is little-endian, and vice versa when receiving. But because I was creating the bytes by hand using bit shifting in big-endian format, they were then reversed into non-network byte order while everything else was in network byte order.
The solution here is either to add conditional logic that checks BitConverter.IsLittleEndian (as described in the Microsoft docs), or to let the System.BitConverter class handle it for you.
For instance: in my example above I was trying to convert a 32-bit int into a 16-bit unsigned integer. I ended up replacing the above code with:
public static byte[] IntTo16Bit(int input)
{
    ushort input16;
    if (!UInt16.TryParse(input.ToString(), out input16))
    {
        throw new Exception($"Input was {input}");
    }
    if (BitConverter.IsLittleEndian)
    {
        return BitConverter.GetBytes(input16).Reverse().ToArray();
    }
    return BitConverter.GetBytes(input16);
}
and plan on better handling when the i32 cannot be converted into a u16.
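As a side note (my own suggestion, not from the original answer): if you are on .NET Core 2.1 or later, System.Buffers.Binary.BinaryPrimitives can write a value in an explicit byte order, which avoids both the IsLittleEndian check and the string round-trip. A sketch:
using System;
using System.Buffers.Binary;

public static byte[] IntTo16BitBigEndian(int input)
{
    ushort value = checked((ushort)input); // throws OverflowException if input does not fit in a ushort
    byte[] result = new byte[2];
    BinaryPrimitives.WriteUInt16BigEndian(result, value); // network byte order, regardless of CPU endianness
    return result;
}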

c# convert 2 bytes into int value

First, I have read many posts and tried BitConverter methods for the conversion, but I haven't gotten the desired result.
From a 2 byte array of:
byte[] dateArray = new byte[] { 0x07 , 0xE4 };
I need to get an integer with the value 2020, i.e. the decimal of 0x7E4.
The following method does not return the desired value:
int i1 = BitConverter.ToInt16(dateArray, 0);
Endianness tells you how numbers are stored on your computer. There are two possibilities: little-endian and big-endian.
Big-endian means the most significant byte is stored first, i.e. 2020 would become 0x07, 0xE4.
Little-endian means the least significant byte is stored first, i.e. 2020 would become 0xE4, 0x07.
Most computers are little-endian, hence the other way round from what a human would expect. With BitConverter.IsLittleEndian, you can check which type of endianness your computer has. Your code would become:
byte[] dateArray = new byte[] { 0x07, 0xE4 };
if (BitConverter.IsLittleEndian)
{
    Array.Reverse(dateArray);
}
int i1 = BitConverter.ToInt16(dateArray, 0);
dateArray[0] << 8 | dateArray[1]
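That last expression treats dateArray[0] as the high byte and dateArray[1] as the low byte, so it gives the same result on any machine. A complete sketch of my own, using the array from the question:
using System;

class Demo
{
    static void Main()
    {
        byte[] dateArray = new byte[] { 0x07, 0xE4 };
        int year = dateArray[0] << 8 | dateArray[1]; // 0x07 * 256 + 0xE4
        Console.WriteLine(year);                     // 2020
    }
}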

Byte[] to BitArray and back to Byte[]

As the title states, I'm trying to convert a byte array to a bit array and back to a byte array again.
I am aware that Array.CopyTo() takes care of that, but the byte array received is not the same as the original one due to how BitArray stores values LSB-first.
How do you go about it in C#?
This should do it
static byte[] ConvertToByte(BitArray bits) {
    // Make sure we have enough space allocated even when number of bits is not a multiple of 8
    var bytes = new byte[(bits.Length - 1) / 8 + 1];
    bits.CopyTo(bytes, 0);
    return bytes;
}
You can verify it using a simple driver program like below
// test to make sure it works
static void Main(string[] args) {
    var bytes = new byte[] { 10, 12, 200, 255, 0 };
    var bits = new BitArray(bytes);
    var newBytes = ConvertToByte(bits);

    if (bytes.SequenceEqual(newBytes))
        Console.WriteLine("Successfully converted byte[] to bits and then back to byte[]");
    else
        Console.WriteLine("Conversion Problem");
}
I know that the OP is aware of the Array.CopyTo solution (which is similar to what I have here), but I don't see why it would cause any bit-order issues. FYI, I used .NET 4.5.2 to verify this, and I have included the test case above to confirm the results.
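To double-check the bit ordering the question worries about, here is a small sketch of my own (not from either answer) showing that index 0 of a BitArray maps to the least significant bit of byte 0, and that CopyTo uses the same mapping, so the round trip preserves the bytes:
using System;
using System.Collections;

class BitOrderDemo
{
    static void Main()
    {
        var bits = new BitArray(new byte[] { 1 }); // binary 00000001

        Console.WriteLine(bits[0]); // True  -> index 0 is the least significant bit of byte 0
        Console.WriteLine(bits[7]); // False -> index 7 is the most significant bit of byte 0

        var back = new byte[1];
        bits.CopyTo(back, 0);
        Console.WriteLine(back[0]); // 1 -> the same byte comes back out
    }
}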
To get a BitArray from a byte[], you can simply use the BitArray constructor:
BitArray bits = new BitArray(bytes);
To get the byte[] back from the BitArray there are many possible solutions. I think a very elegant solution is to use the BitArray.CopyTo method. Just create a new array and copy the bits into it:
byte[] resultBytes = new byte[(bits.Length - 1) / 8 + 1];
bits.CopyTo(resultBytes, 0);

Bit shifting c# what's going on here?

So I'm curious, what exactly is going on here?
static void SetUInt16(byte[] bytes, int offset, ushort val)
{
    bytes[offset] = (byte)((val & 0x0ff00) >> 8);
    bytes[offset + 1] = (byte)(val & 0x0ff);
}
Basically the idea in this code is to write a 16-bit int into a byte buffer at a specific location. The problem is that I'm trying to emulate it using:
using (var ms = new MemoryStream())
using (var w = new BinaryWriter(ms))
{
    w.Write((ushort)1);
}
I'm expecting to read 1 but instead I'm getting 256. Is this an endianness issue?
The code writes a 16-bit integer in big-endian order: the upper byte is written first. That is not what BinaryWriter does; it writes in little-endian order.
When you decode the data, are you getting 256 when you expect 1? BinaryWriter.Write uses little-endian encoding, while your SetUInt16 method uses big-endian.
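A small sketch of my own contrasting the two layouts; reading the big-endian pair { 0x00, 0x01 } back as a little-endian ushort is exactly what produces the 256 from the question:
using System;
using System.IO;

class EndianDemo
{
    static void Main()
    {
        byte[] written;
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write((ushort)1);     // BinaryWriter writes little-endian
            w.Flush();
            written = ms.ToArray();
        }
        Console.WriteLine(BitConverter.ToString(written)); // 01-00

        // The SetUInt16 method from the question produces the big-endian layout instead.
        var buffer = new byte[2];
        buffer[0] = (byte)((1 & 0xFF00) >> 8); // high byte first
        buffer[1] = (byte)(1 & 0xFF);          // low byte second
        Console.WriteLine(BitConverter.ToString(buffer));    // 00-01
        Console.WriteLine(BitConverter.ToUInt16(buffer, 0)); // 256 on a little-endian machine
    }
}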

Convert 2 bytes to a number

I have a control that has a byte array in it.
Every now and then there are two bytes that tell me some info about the number of future items in the array.
So as an example I could have:
...
...
Item [4] = 7
Item [5] = 0
...
...
The value of this is clearly 7.
But what about this?
...
...
Item [4] = 0
Item [5] = 7
...
...
Any idea what that equates to (as a normal int)?
I converted to binary and thought it might be 11100000000, which equals 1792. But I don't know if that is how it really works (i.e. does it use all 8 bits of each byte?).
Is there any way to know this without testing?
Note: I am using C# 3.0 and Visual Studio 2008
BitConverter can easily convert the two bytes into a two-byte integer value:
// assumes byte[] Item = someObject.GetBytes():
short num = BitConverter.ToInt16(Item, 4); // makes a short out of Item[4] and Item[5]
A two-byte number has a low and a high byte. The high byte is worth 256 times as much as the low byte:
value = 256 * high + low;
So, for high=0 and low=7, the value is 7. But for high=7 and low=0, the value becomes 1792.
This of course assumes that the number is a simple 16-bit integer. If it's anything fancier, the above won't be enough. Then you need more knowledge about how the number is encoded, in order to decode it.
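A quick sketch of my own checking both byte orders against the 256 * high + low formula (the BitConverter line assumes a little-endian machine, where the low byte is stored first):
using System;

class TwoByteDemo
{
    static void Main()
    {
        Console.WriteLine(256 * 0 + 7); // Item[4] = 7 (low), Item[5] = 0 (high) -> 7
        Console.WriteLine(256 * 7 + 0); // Item[4] = 0 (low), Item[5] = 7 (high) -> 1792

        byte[] Item = { 0, 0, 0, 0, 0, 7 };               // Item[4] = 0, Item[5] = 7
        Console.WriteLine(BitConverter.ToInt16(Item, 4)); // 1792 on a little-endian machine
    }
}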
The order in which the high and low bytes appear is determined by the endianness of the byte stream. In big-endian, you will see high before low (at a lower address), in little-endian it's the other way around.
You say "this value is clearly 7", but it depends entirely on the encoding. If we assume full-width bytes, then in little-endian, yes; 7, 0 is 7. But in big endian it isn't.
For little-endian, what you want is
int i = byte[i] | (byte[i+1] << 8);
and for big-endian:
int i = (byte[i] << 8) | byte[i+1];
But other encoding schemes are available; for example, some schemes use 7-bit arithmetic, with the 8th bit as a continuation flag, and schemes like UTF-8 signal the sequence length in the first byte (so the first byte has only limited room for data bits), with the following bytes carrying the rest.
If you simply want to put those two bytes next to each other in binary format, and see what that big number is in decimal, then you need to use this code:
ushort num;
if (BitConverter.IsLittleEndian)
{
    byte[] tempByteArray = new byte[2] { Item[5], Item[4] };
    num = BitConverter.ToUInt16(tempByteArray, 0);
}
else
{
    num = BitConverter.ToUInt16(Item, 4);
}
If you use short num = BitConverter.ToInt16(Item, 4); as seen in the accepted answer, you are assuming that the most significant bit of those two bytes is the sign bit (1 = negative and 0 = positive). That answer also assumes that the byte order of the data matches your machine's endianness (little-endian on most systems). See this for more info on the sign bit.
If those bytes are the "parts" of an integer, it works like that. But beware that the order of the bytes is platform-specific, and it also depends on the length of the integer (16 bit = 2 bytes, 32 bit = 4 bytes, ...).
In case Item[5] is the most significant byte:
ushort result = BitConverter.ToUInt16(new byte[2] { Item[5], Item[4] }, 0);
int result = 256 * Item[5] + Item[4];
