I start with an unsigned byte array and convert it to a signed one, so is the printed result correct?
byte[] unsigned = new byte[] {10,100,120,180,200,220,240};
sbyte[] signed = Utils.toSignedByteArray(unsigned);
And the printed output (I just append the values with a StringBuilder):
signed: [10,100,120,-76,-56,-36,-16]
unsigned : [10,100,120,180,200,220,240]
where:
public static sbyte[] toSignedByteArray(byte[] unsigned)
{
    sbyte[] signed = new sbyte[unsigned.Length];
    Buffer.BlockCopy(unsigned, 0, signed, 0, unsigned.Length);
    return signed;
}
If I change to this I get the same result.
sbyte[] signed = (sbyte[])(Array)unsigned;
Shouldn't -128 (signed) become 0 (unsigned), -118 become 10, and so on, and not 10 (signed) = 10 (unsigned)?!
Because:
sbyte   -128 to 127
byte       0 to 255
So??
Signed integers are represented in the two's complement system. Buffer.BlockCopy (and the (sbyte[])(Array) cast) just copies the raw bits, so each byte keeps its bit pattern and is merely reinterpreted: values 0-127 stay the same, while 128-255 come out as negative sbyte values (value - 256).
Examples:
Bits       Unsigned value   Two's complement value
00000000   0                0
00000001   1                1
00000010   2                2
01111110   126              126
01111111   127              127
10000000   128              -128
10000001   129              -127
10000010   130              -126
11111110   254              -2
11111111   255              -1
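As a quick check (a minimal sketch of my own, not from the original question), casting a single byte shows that only the interpretation of the bit pattern changes, never the bits themselves:

using System;

byte u = 180;                      // bit pattern 10110100
sbyte s = unchecked((sbyte)u);     // same bits, reinterpreted: 180 - 256 = -76

Console.WriteLine(Convert.ToString(u, 2).PadLeft(8, '0'));   // 10110100
Console.WriteLine(s);                                        // -76
Console.WriteLine(unchecked((byte)s));                       // back to 180

This mirrors what Buffer.BlockCopy does for the whole array: values 0-127 print identically, while 180, 200, 220 and 240 come out as -76, -56, -36 and -16.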
I'm trying to port the Accidental Noise library from C# to Lua. I've run into an issue when porting the FNV-1A algorithm: the result of the multiplication with the prime doesn't match when using the same input values.
First I'd like to show the C# code of the algorithm:
// The "new" FNV-1A hashing
private const UInt32 FNV_32_PRIME = 0x01000193;
private const UInt32 FNV_32_INIT = 2166136261;
public static UInt32 FNV32Buffer(Int32[] uintBuffer, UInt32 len)
{
//NOTE: Completely untested.
var buffer = new byte[len];
Buffer.BlockCopy(uintBuffer, 0, buffer, 0, buffer.Length);
var hval = FNV_32_INIT;
for (var i = 0; i < len; i++)
{
hval ^= buffer[i];
hval *= FNV_32_PRIME;
}
return hval;
}
This function is called as such (simplified) elsewhere in the codebase:
public static UInt32 HashCoordinates(Int32 x, Int32 y, Int32 seed)
{
    Int32[] d = { x, y, seed };
    return FNV32Buffer(d, sizeof(Int32) * 3);
}
I noticed the sizeof(Int32) result is always multiplied by the number of elements in the Int32[] array. In this case (on my machine) the result is 12, so the buffer in the FNV32Buffer function becomes an array of 12 bytes.
Inside the for loop we see the following:
A bitwise XOR operation is performed on hval
hval is multiplied by a prime number
The result of the multiply operation doesn't match with the result of my Lua implementation.
My Lua implementation is as follows:
local FNV_32_PRIME = 0x01000193
local FNV_32_INIT = 0x811C9DC5

local function FNV32Buffer(buffer)
    local bytes = {}
    for _, v in ipairs(buffer) do
        local b = toBits(v, 32)
        for i = 1, 32, 8 do
            bytes[#bytes + 1] = string.sub(b, i, i + 7)
        end
    end

    local hash = FNV_32_INIT
    for i, v in ipairs(bytes) do
        hash = bit.bxor(hash, v)
        hash = hash * FNV_32_PRIME
    end

    return hash
end
I don't supply the buffer length in my implementation, as Lua's bitwise operators always work on 32-bit signed integers.
In my implementation I create a bytes array and for each number in the buffer table I extract the bytes. When comparing the C# and Lua byte arrays I get mostly similar results:
byte #   C#         Lua
1        00000000   00000000
2        00000000   00000000
3        00000000   00000000
4        00000000   00000000
5        00000000   00000000
6        00000000   00000000
7        00000000   00000000
8        00000000   00000000
9        00101100   00000000
10       00000001   00000000
11       00000000   00000001
12       00000000   00101100
It seems the byte ordering differs due to endianness, but I can change that, and I don't believe it has anything to do with my issue right now.
For both the C# and Lua byte arrays, I loop through the bytes and perform the FNV-1A algorithm on each one.
When using the values {0, 0, 300} (x, y, seed) as input for the C# and Lua functions I get the following results after the first iteration of the FNV hashing loop is finished:
C#: 00000101_00001100_01011101_00011111 (84696351)
Lua: 01111110_10111100_11101000_10111000 (2126309560)
As can be seen, the results are very different after just the first iteration of the hashing loop. From debugging I can see the numbers diverge when multiplying with the prime. I believe the cause could be that Lua uses signed numbers by default, whereas the C# implementation works on unsigned integers. Or perhaps the results are different due to differences in endianness?
I did read that Lua uses unsigned integers when working with hex literals. Since FNV_32_PRIME is a hex literal, I guess it should work the same as the C# implementation, yet the end result differs.
How can I make sure the Lua implementation matches the results of the C# implementation?
LuaJIT supports CPU-native datatypes through its FFI library.
64-bit values (suffixed with LL) are used to avoid precision loss in the multiplication result.
-- LuaJIT 2.1 required
local ffi = require'ffi'

-- The "new" FNV-1A hashing
local function FNV32Buffer(data, size_in_bytes)
    data = ffi.cast("uint8_t*", data)
    local hval = 0x811C9DC5LL
    for j = 0, size_in_bytes - 1 do
        hval = bit.bxor(hval, data[j]) * 0x01000193LL
    end
    return tonumber(bit.band(2^32 - 1, hval))
end

local function HashCoordinates(x, y, seed)
    local d = ffi.new("int32_t[?]", 3, x, y, seed)
    return FNV32Buffer(d, ffi.sizeof(d))
end

print(HashCoordinates(0, 0, 300)) --> 3732851086
Arithmetic on 32-bit unsigned numbers does not necessarily produce a 32-bit number.
Not tested, but I think the result of the multiplication with the prime should be normalized using bit.tobit(), as stated in the reference you provided.
I'm trying to look at how a System.Single value is represented in memory.
From what I've read, System.Single is represented like this:
1 sign bit (s), a 23-bit fractional significand (f) and an 8-bit biased exponent (e).
(-1)^s * 1.f * 2^(e-127)
In the case of 16777216, s = 0, f = 00000000000000000000000 (23 zeros), e = 151 = 10010111
ie. 1.00000000000000000000000 * 2^24
In memory, I'd expect it to look like this (sign bit, fractional significand, biased exponent):
0 00000000000000000000000 10010111
Or in bytes:
00000000 00000000 00000000 10010111
But instead it's giving me this:
00000000 00000000 10000000 01001011
It looks like the last bit is missing from the biased exponent, and there is a stray 1 at the start of the 3rd byte. Why is this?
I was able to find the right exponent by reversing the bits in each byte:
00000000 00000000 00000001 11010010
Taking the last 9 bits and reversing those again:
00000000 00000000 0000000 010010111
This is now equal to what I'd expect, but what is with this strange order?
What format is this binary number stored in?
Here's my code:
using System;
using System.Linq;

namespace SinglePrecision
{
    class Program
    {
        static void Main(string[] args)
        {
            Single a = 16777216;
            byte[] aBytes = BitConverter.GetBytes(a);
            string s = string.Join(" ", aBytes.Select(x => Convert.ToString(x, 2).PadLeft(8, '0')));
            //s = 00000000 00000000 10000000 01001011
        }
    }
}
First, you got the order of the parts wrong. It is sign bit s, then exponent e, then fraction f, so your binary representation, which you otherwise calculated correctly, would be
0 10010111 00000000000000000000000
s e        f
These bits are stored in 4 contiguous bytes of memory:
01001011 10000000 00000000 00000000
se       f
byte1    byte2    byte3    byte4
but because your system is little-endian, they are stored in reverse order:
00000000 00000000 10000000 01001011
byte4    byte3    byte2    byte1
Endianness reverses the order of bytes, but it does not reverse the order of bits within a byte.
The rightmost byte is the first logical byte of the float value, and its leftmost bit is the sign bit, which is 0.
The second byte from the right is the second logical byte, and its leftmost bit is the last bit of your exponent.
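To make the logical order visible, here is a small variation of the program from the question (my own sketch, not from the original posts) that reverses the bytes on a little-endian machine before printing:

using System;
using System.Linq;

Single a = 16777216;                       // 2^24
byte[] aBytes = BitConverter.GetBytes(a);

// BitConverter returns the bytes in memory order; on a little-endian
// machine that is least-significant byte first, so reverse them for display.
if (BitConverter.IsLittleEndian)
{
    aBytes = aBytes.Reverse().ToArray();
}

string s = string.Join(" ", aBytes.Select(x => Convert.ToString(x, 2).PadLeft(8, '0')));
Console.WriteLine(s);   // 01001011 10000000 00000000 00000000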
I have a question: I need to convert two ushort numbers, let's say 1 and 2, into 1 byte. Something like
0 0 1 0 (value of 2) and 0 0 0 1 (value of 1)
so the result is a byte with the value 00100001. Is it possible? I am not a master low-level coder.
This should work:
(byte)(((value1 & 0xF)<<4) | (value2 & 0xF))
I am not a master low-level coder.
Well, now is the time to become one!
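For the exact values from the question (2 in the high nibble, 1 in the low nibble), a quick sketch of what that expression produces:

using System;

ushort value1 = 2;   // goes into the high nibble
ushort value2 = 1;   // goes into the low nibble

byte packed = (byte)(((value1 & 0xF) << 4) | (value2 & 0xF));

Console.WriteLine(Convert.ToString(packed, 2).PadLeft(8, '0'));   // 00100001
Console.WriteLine(packed);                                        // 33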
Edit: this answer was made before the question was clear enough to understand exactly what was required. See other answers.
Use a 'bit mask' on the two numbers, then bitwise-OR them together.
I can't quite tell exactly how you want it, but let's say you wanted the first 4 bits of the first ushort, then the last 4 bits of the second ushort. To note: ushort is 16 bits wide.
ushort u1 = 44828; //10101111 00011100 in binary
ushort u2 = 65384; //11111111 01101000 in binary
int u1_first4bits = (u1 & 0xF000) >> 8;
The 'mask' is 0xF000. It masks over u1:
44828 1010 1111 0001 1100
0xF000 1111 0000 0000 0000
bitwise-AND 1010 0000 0000 0000
The problem is, this new number is still 16 bits long, so we must shift it right by 8 bits with >> 8 to make it
0000 0000 1010 0000
Then another mask operation on the second number:
int u2_last4bits = u2 & 0x000F;
Illustrated:
65384 1111 1111 0110 1000
0x000F 0000 0000 0000 1111
bitwise-AND 0000 0000 0000 1000
Here, we did not need to shift the bits, as they are already where we want them.
Then we bitwise-OR them together:
byte b1 = (byte)(u1_first4bits | u2_last4bits);
//b1 is now 10101000 which is 168
Illustrated:
u1_first4bits 0000 0000 1010 0000
u2_last4bits 0000 0000 0000 1000
bitwise-OR 0000 0000 1010 1000
Notice that u1_first4bits and u2_last4bits needed to be of type int - this is because C# bitwise operations on smaller integral types promote their operands and return int. To create our byte b1, we had to cast the bitwise-OR result to a byte.
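Putting the pieces together into one compilable snippet (my own consolidation of the steps above, including unpacking the two nibbles again at the end):

using System;

ushort u1 = 44828;   // 10101111 00011100 in binary
ushort u2 = 65384;   // 11111111 01101000 in binary

int u1_first4bits = (u1 & 0xF000) >> 8;   // 0000 0000 1010 0000
int u2_last4bits  = u2 & 0x000F;          // 0000 0000 0000 1000

byte b1 = (byte)(u1_first4bits | u2_last4bits);
Console.WriteLine(b1);                             // 168 (10101000)

// Unpacking the two nibbles again:
int highNibble = (b1 & 0xF0) >> 4;                 // 1010 = 10
int lowNibble  = b1 & 0x0F;                        // 1000 = 8
Console.WriteLine($"{highNibble} {lowNibble}");    // 10 8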
Assuming you want to take the two ushorts (16 bits each) and convert them to a 32-bit representation (an integer), you can use the BitArray class, fill it with a 4-byte array, and convert it to an integer.
The following example will produce:
00000000 00000010 00000000 00000001
which is 131073 as an integer.
// requires: using System; using System.Collections;

ushort x1 = 1;
ushort x2 = 2;

//get the bytes of the ushorts. 2 bytes per number.
byte[] b1 = System.BitConverter.GetBytes(x1);
byte[] b2 = System.BitConverter.GetBytes(x2);

//Combine the two arrays into one array of length 4.
byte[] result = new Byte[4];
result[0] = b1[0];
result[1] = b1[1];
result[2] = b2[0];
result[3] = b2[1];

//fill the BitArray.
BitArray br = new BitArray(result);

//test output.
int c = 0;
for (int i = br.Length - 1; i >= 0; i--)
{
    Console.Write(br.Get(i) ? "1" : "0");
    if (++c == 8)
    {
        Console.Write(" ");
        c = 0;
    }
}

//convert to int and output.
int[] array = new int[1];
br.CopyTo(array, 0);
Console.WriteLine();
Console.Write(array[0]);
Console.ReadLine();
Of course you can alter this example and throw away 1 byte per ushort, but that wouldn't be a correct "conversion" then.
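As a side note (my own addition, under the same assumption that you want a 32-bit result), plain bit shifting produces the same 131073 without a BitArray:

using System;

ushort x1 = 1;
ushort x2 = 2;

// x1 occupies the low 16 bits, x2 the high 16 bits, matching the byte layout above.
int combined = (x2 << 16) | x1;
Console.WriteLine(combined);   // 131073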
I want to convert an integer to a byte. If the integer is bigger than the byte range {0,255}, it should wrap around and count from the start. I mean, if the integer = 260 then the function returns 5, or if int = 1120 then it returns 96, and so on.
You can use:
byte myByte = (byte)(myInt & 0xFF);
However, note that 260 will give 4 instead of 5. (E.g. 255 -> 255, 256 -> 0, 257 -> 1, 258 -> 2, 259 -> 3, 260 -> 4)
If you really want 260 to give 5, then you are probably looking for the remainder after dividing by 255. That can be calculated using:
byte myByte = (byte)(myInt % 255);
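A tiny comparison of the two expressions (my own check) for the two numbers mentioned in the question:

using System;

foreach (int myInt in new[] { 260, 1120 })
{
    byte masked = (byte)(myInt & 0xFF);   // wraps at 256
    byte modded = (byte)(myInt % 255);    // remainder after dividing by 255
    Console.WriteLine($"{myInt}: & 0xFF -> {masked}, % 255 -> {modded}");
}
// Output:
// 260: & 0xFF -> 4, % 255 -> 5
// 1120: & 0xFF -> 96, % 255 -> 100

Note that the question's own examples pull in different directions: 260 -> 5 matches % 255, while 1120 -> 96 matches & 0xFF.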
The best way to describe my misunderstanding is with the code itself:
var emptyByteArray = new byte[2];
var specificByteArray = new byte[] {150, 105}; //0x96 = 150, 0x69 = 105
var bitArray1 = new BitArray(specificByteArray);
bitArray1.CopyTo(emptyByteArray, 0); //[0]: 150, [1]:105
var hexString = "9669";
var intValueForHex = Convert.ToInt32(hexString, 16); //16 indicates to convert from hex
var bitArray2 = new BitArray(new[] {intValueForHex}) {Length = 16}; //Length=16 truncates the BitArray
bitArray2.CopyTo(emptyByteArray, 0); //[0]:105, [1]:150 (reversed, why??)
I've been reading that the BitArray iterates from the LSB to the MSB. What's the best way for me to initialize the BitArray starting from a hex string, then?
I think you are thinking about it wrong. Why are you even using a BitArray? Endianness is a byte-related convention; BitArray is just an array of bits. Since it is least-significant bit first, the correct way to store a 32-bit number in a bit array is with bit 0 at index 0 and bit 31 at index 31. This isn't just my personal bias towards little-endianness (bit 0 should be in byte 0, not byte 3, for goodness' sake); it's because BitArray stores bit 0 of a byte at index 0 in the array. It also stores bit 0 of a 32-bit integer in bit 0 of the array, no matter the endianness of the platform you are on.
For example, instead of your 9669, let's look at 1234. No matter what platform you are on, that 16-bit number has the following bit representation. Because we write a hex number with the most significant hex digit (1) on the left and the least significant hex digit (4) on the right, bit 0 is on the right (a human convention):
   1    2    3    4
0001 0010 0011 0100
No matter how an architecture orders the bytes, bit 0 of a 16-bit number always means the least-significant bit (the right-most here) and bit 15 means the most-significant bit (the left-most here). Due to this, your bit array will always be like this, with bit 0 on the left because that's the way I read an array (with index 0 being bit 0 and index 15 being bit 15):
---4--- ---3--- ---2--- ---1---
0 0 1 0 1 1 0 0 0 1 0 0 1 0 0 0
What you are doing is trying to impose the byte order you want onto an array of bits where it doesn't belong. If you want to reverse the bytes, then you'll get this in the bit array which makes a lot less sense, and means you'll have to reverse the bytes again when you get the integer back out:
---2--- ---1--- ---4--- ---3---
0 1 0 0 1 0 0 0 0 0 1 0 1 1 0 0
I don't think this makes any kind of sense for storing an integer. If you want to store the big-endian representation of a 32-bit number in the BitArray, then what you are really storing is a byte array that just happens to be the big-endian representation of a 32-bit number, and you should convert to a byte array first, make it big-endian if necessary, and then put it in the BitArray:
int number = 0x1234;
byte[] bytes = BitConverter.GetBytes(number);
if (BitConverter.IsLittleEndian)
{
    bytes = bytes.Reverse().ToArray();
}
BitArray ba = new BitArray(bytes);
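And, as a hypothetical round trip under the same assumptions (my own sketch), reading the number back out means reversing the bytes again:

using System;
using System.Collections;
using System.Linq;

int number = 0x1234;
byte[] bytes = BitConverter.GetBytes(number);
if (BitConverter.IsLittleEndian)
{
    bytes = bytes.Reverse().ToArray();   // store big-endian, as above
}
BitArray ba = new BitArray(bytes);

// Round trip: copy the bits back into bytes and undo the reversal.
byte[] back = new byte[4];
ba.CopyTo(back, 0);
if (BitConverter.IsLittleEndian)
{
    back = back.Reverse().ToArray();
}
Console.WriteLine(BitConverter.ToInt32(back, 0) == number);   // True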