What is the equivalent of this Swift UInt8 conversion in C#?

I'm trying to convert this Swift snippet to C#, but I'm a bit confused. Basically, I need this function to get the Ranging Data from beacons, as indicated here: https://github.com/google/eddystone/tree/master/eddystone-uid
func getTxPower(frameData: NSData) -> Int
{
    let count = frameData.length
    var frameBytes = [UInt8](repeating: 0, count: count)
    frameData.getBytes(&frameBytes, length: count)
    let txPower = Int(Int8(bitPattern: frameBytes[1]))
    return txPower
}
I get an NSData and convert it into a UInt8 array. I just need to take the element in the second position and convert it into a signed int.
This is the C# code I tried:
int getTxPower(NSData frameData)
{
    var count = frameData.Length;
    byte[] frameBytes = new byte[Convert.ToInt32(count)];
    Marshal.Copy(frameData.Bytes, frameBytes, 0, Convert.ToInt32(count));
    int txPower = frameBytes[1];
    return txPower;
}
I expected to get negative values too because, as written in the link, the Tx power has a value range from -100 dBm to +20 dBm at a resolution of 1 dBm.
Thanks to those who will help me.

Presumably, instead of:
int txPower = frameBytes[1];
(which just zero-extends an unsigned byte to 32 bits), you want:
int txPower = (int)((sbyte)frameBytes[1]);
(which reinterprets the unsigned byte as a signed byte, then sign-extends it to 32 bits).
Note that the (int) cast can be left implicit, if that is clearer:
int txPower = (sbyte)frameBytes[1];
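As a self-contained sketch of the idea (using a plain byte array in place of the NSData from the question; the frame contents here are made up for illustration):

```csharp
using System;

class TxPowerDemo
{
    // Reinterpret the byte at index 1 as a signed two's-complement value.
    static int GetTxPower(byte[] frameBytes)
    {
        return (sbyte)frameBytes[1];
    }

    static void Main()
    {
        // 0xEE is 238 unsigned, but -18 when reinterpreted as sbyte.
        byte[] frame = { 0x00, 0xEE };
        Console.WriteLine(GetTxPower(frame)); // prints -18
    }
}
```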

Related

How to interpret byte as int?

I have a byte array received from a C++ program.
arr[0..3] // a real32,
arr[4]    // a uint8,
How can I interpret arr[4] as an int?
(uint)arr[4] // Err: can't implicitly convert string to int.
BitConverter.ToUint16(arr[4]) // Err: Invalid argument.
buff[0+4] as int // Err: must be reference or nullable type
Do I have to append a zero byte to interpret it as a UInt16?
OK, here is the confusion. Initially, I defined my class:
byte[] buff;
buff = getSerialBuffer();

public class Reading
{
    public string scale_id;
    public string measure;
    public int measure_revised;
    public float wt;
}
rd = new Reading();
// !! here is the confusion... !!
// Err: Can't implicitly convert 'string' to 'int'
rd.measure = string.Format("{0}", buff[0 + 4]);
// Then I thought maybe I should convert buff[4] to an int first?
// I threw every form of conversion at it; none worked.
// But later it turned out:
rd.measure_revised = buff[0 + 4]; // just ok
So basically, I don't understand why this happens:
rd.measure = string.Format("{0}", buff[0 + 4]);
// Err: Can't implicitly convert 'string' to 'int'
If buff[4] is a byte and a byte is a uint8, what does "can't implicitly convert string to int" mean? It confuses me.
You were almost there. Assuming you wanted a 32-bit int from the first 4 bytes (it's hard to interpret your question):
BitConverter.ToInt32(arr, 0);
This says to take the 4 bytes from arr, starting at index 0, and turn them into a 32-bit int. (docs)
Note that BitConverter uses the endianness of the computer, so on x86/x64 this will be little-endian.
If you want to use an explicit endianness, you'll need to construct the int by hand:
int littleEndian = arr[0] | (arr[1] << 8) | (arr[2] << 16) | (arr[3] << 24);
int bigEndian = arr[3] | (arr[2] << 8) | (arr[1] << 16) | (arr[0] << 24);
If instead you wanted a 32-bit floating-point number from the first 4 bytes, see Dmitry Bychenko's answer.
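To make the manual construction concrete, here is a small sketch reusing the five-byte example array from the answer below; the expected value is simply the little-endian combination of the first four bytes:

```csharp
using System;

class ByteToIntDemo
{
    static void Main()
    {
        byte[] arr = { 182, 243, 157, 63, 123 };

        // Explicit little-endian construction, independent of machine endianness:
        int byHand = arr[0] | (arr[1] << 8) | (arr[2] << 16) | (arr[3] << 24);
        Console.WriteLine(byHand); // prints 1067316150

        // The lone uint8 at index 4 widens to int implicitly:
        int tail = arr[4];
        Console.WriteLine(tail); // prints 123
    }
}
```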
If I've understood you right, you have a byte (not string) array:
byte[] arr = new byte[] {
    182, 243, 157, 63, // Real32 - C# Single or float (e.g. 1.234f)
    123                // uInt8  - C# byte (e.g. 123)
};
To get the float and the byte back you can try BitConverter:
// read float / single starting from the 0th byte
float realPart = BitConverter.ToSingle(arr, 0);
byte bytePart = arr[4];

Console.Write($"Real Part: {realPart}; Integer Part: {bytePart}");
Outcome:
Real Part: 1.234; Integer Part: 123
Same idea (BitConverter class) if we want to encode arr:
float realPart = 1.234f;
byte bytePart = 123;

byte[] arr = BitConverter
    .GetBytes(realPart)
    .Concat(new byte[] { bytePart })
    .ToArray();

Console.Write(string.Join(" ", arr));
Outcome:
182 243 157 63 123

Byte conversion to INT64, under the hood

Good day. For a current project I need to know how data types are represented as bytes. For example, if I use:
long three = 500;
var bytes = BitConverter.GetBytes(three);
I get the values 244, 1, 0, 0, 0, 0, 0, 0. I get that it is a 64-bit value, and 8 bits go into a byte, so there are 8 bytes. But how do 244 and 1 make up 500? I tried Googling it, but all I get is "use BitConverter". I need to know how BitConverter works under the hood. If anybody can point me to an article or explain how this stuff works, it would be appreciated.
It's quite simple:
BitConverter.GetBytes((long)1);   // {1,0,0,0,0,0,0,0}
BitConverter.GetBytes((long)10);  // {10,0,0,0,0,0,0,0}
BitConverter.GetBytes((long)100); // {100,0,0,0,0,0,0,0}
BitConverter.GetBytes((long)255); // {255,0,0,0,0,0,0,0}
BitConverter.GetBytes((long)256); // {0,1,0,0,0,0,0,0}   this 1 is worth 256
BitConverter.GetBytes((long)500); // {244,1,0,0,0,0,0,0} this is your 500 = 244 + 1 * 256
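In other words, the bytes are base-256 digits stored least significant first (assuming a little-endian machine, as on x86/x64). A quick sketch that rebuilds the original value from its bytes:

```csharp
using System;

class LittleEndianDemo
{
    static void Main()
    {
        byte[] bytes = BitConverter.GetBytes(500L);
        // On a little-endian machine this prints: 244,1,0,0,0,0,0,0
        Console.WriteLine(string.Join(",", bytes));

        // Treat each byte as a base-256 digit, least significant first:
        long rebuilt = 0;
        for (int i = bytes.Length - 1; i >= 0; i--)
            rebuilt = rebuilt * 256 + bytes[i];
        Console.WriteLine(rebuilt); // prints 500
    }
}
```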
If you need the source code, you can check Microsoft's GitHub, since the implementation is open source:
https://github.com/dotnet
From the source code:
// Converts a long into an array of bytes with length
// eight.
[System.Security.SecuritySafeCritical] // auto-generated
public unsafe static byte[] GetBytes(long value)
{
    Contract.Ensures(Contract.Result<byte[]>() != null);
    Contract.Ensures(Contract.Result<byte[]>().Length == 8);
    byte[] bytes = new byte[8];
    fixed (byte* b = bytes)
        *((long*)b) = value;
    return bytes;
}

How to convert a byte in two's complement form to its integer value in C#?

Given a byte in two's complement form, I am attempting to convert that byte into its decimal representation. For example, I would need to convert the byte 10000000 to -128 and the byte 01111111 to 127. I've looked at this answer, but I haven't been successful with taking one of the answers and making it work for me.
How do I go about doing the conversion?
CLARIFICATION: I'm trying to convert a byte in two's complement form into an int, not a string representation of a binary value into an int.
public static int ConvertTwosComplementByteToInteger(byte rawValue)
{
    // If it is a positive value, return it
    if ((rawValue & 0x80) == 0)
    {
        return rawValue;
    }
    // Otherwise perform the two's complement math on the value
    return (byte)(~(rawValue - 0x01)) * -1;
}
You'll need to cast it to sbyte:
byte b = 0b10000000;
sbyte s = (sbyte)b;
Console.WriteLine(s); // -128
You can convert this to an Int32 value and encapsulate it with:
public static int ByteToInt32(byte value)
{
    return (sbyte)value;
}
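Both examples from the question check out against such a helper; a minimal console sketch:

```csharp
using System;

class TwosComplementDemo
{
    // The cast to sbyte reinterprets the bit pattern as two's complement.
    static int ByteToInt32(byte value) => (sbyte)value;

    static void Main()
    {
        Console.WriteLine(ByteToInt32(0b1000_0000)); // prints -128
        Console.WriteLine(ByteToInt32(0b0111_1111)); // prints 127
    }
}
```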

Cannot implicitly convert type 'long' to 'ulong'. (+random ulong)

I've declared a class with a ulong field, but when I want to assign it in the constructor, the program tells me that something is wrong, and I don't understand what. It is a ulong; I don't want it to convert anything.
public class new_class
{
    ulong data;

    new_class()
    {
        Random rnd = new Random();
        data = rnd.Next() * 4294967296 + rnd.Next(); // 4294967296 = 2^32
    }
}
Bonus points if you can tell me whether this way of generating a random ulong is correct: it should first generate a random int (32 bits) and store it in the first 32 bits of the ulong, then generate a new random number and store it in the second 32 bits.
Random.Next() only generates 31 bits of true randomness:
A 32-bit signed integer greater than or equal to zero and less than MaxValue.
Since the result is always positive, the most significant bit is always 0.
You could generate two ints and combine them to get a number with 62 bits of randomness:
data = ((ulong)rnd.Next() << 32) + (ulong)rnd.Next();
If you want 64 bits of randomness, another method is to generate 8 random bytes using NextBytes:
byte[] buffer = new byte[8];
rnd.NextBytes(buffer);
data = BitConverter.ToUInt64(buffer, 0);
The problem here is that you're getting a long as the result of your expression. The error message should make that pretty clear:
Cannot implicitly convert type 'long' to 'ulong'.
An explicit conversion exists (are you missing a cast?)
You'd have to explicitly cast the result to ulong to assign it to data, making it:
data = (ulong)(rnd.Next() * 4294967296 + rnd.Next());
Your intent would be clearer, though, if you were to simply shift bits:
ulong data = ((ulong)rnd.Next() << 32)
           | ((ulong)rnd.Next() << 0);
It would also be faster, as bit shifts are a much simpler operation than multiplication.
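As a sanity check of the shift-combination itself, here is a sketch with fixed values in place of rnd.Next(), so the result is verifiable:

```csharp
using System;

class ShiftCombineDemo
{
    // Place `high` in the upper 32 bits and `low` in the lower 32 bits.
    static ulong Combine(uint high, uint low) => ((ulong)high << 32) | low;

    static void Main()
    {
        Console.WriteLine(Combine(1, 0)); // prints 4294967296 (2^32)
        Console.WriteLine(Combine(0x12345678, 0x9ABCDEF0).ToString("X")); // prints 123456789ABCDEF0
    }
}
```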

converting hex number to signed short

I'm trying to convert a string that contains a hex value into its equivalent signed short in C#.
For example:
the hex equivalent of -1 is 0xFFFF (in two bytes)
I want to do the inverse, i.e. I want to convert 0xFFFF into -1.
I'm using:
string x = "FF";
short y = Convert.ToInt16(x, 16);
but the output y is 255 instead of -1; I need the signed equivalent.
Can anyone help me?
Thanks
When your input is "FF", you have the hex string representation of a single byte.
If you convert that to a short (two bytes), the most significant bit of the 16-bit result is not set, so no sign is applied and you get the value 255.
A string of "FFFF", on the other hand, represents two bytes whose most significant bit is 1, so the result is negative (-1) if assigned to a signed type like Int16, and 65535 if assigned to an unsigned type like ushort:
string number = "0xFFFF";
short n = Convert.ToInt16(number, 16);
ushort u = Convert.ToUInt16(number, 16);
Console.WriteLine(n);
Console.WriteLine(u);
number = "0xFF";
byte b = Convert.ToByte(number, 16);
short x = Convert.ToInt16(number, 16);
ushort z = Convert.ToUInt16(number, 16);
Console.WriteLine(n);
Console.WriteLine(x);
Console.WriteLine(z);
Output:
-1
65535
-1
255
255
You're looking to convert the string representation of a signed byte, not a short.
You should use Convert.ToSByte(string, int) instead.
A simple unit test to demonstrate:
[Test]
public void MyTest()
{
    short myValue = Convert.ToSByte("FF", 16);
    Assert.AreEqual(-1, myValue);
}
Please see http://msdn.microsoft.com/en-us/library/bb311038.aspx for full details on converting between hex strings and numeric values.
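Combining both answers, a minimal console sketch: convert at the width that matches the string, then let C# widen the signed result:

```csharp
using System;

class HexToSignedDemo
{
    static void Main()
    {
        // "FF" is one byte wide, so convert at sbyte width, then widen to short:
        short fromByte = Convert.ToSByte("FF", 16);
        // "FFFF" already fills all 16 bits, so Int16 width works directly:
        short fromShort = Convert.ToInt16("FFFF", 16);

        Console.WriteLine(fromByte);  // prints -1
        Console.WriteLine(fromShort); // prints -1
    }
}
```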
