Scenario:
I have a string of hexadecimal characters which encodes 8-bit signed integers. Each two characters represent a byte that uses the leftmost (MSB) bit as the sign (rather than two's complement). I am converting these to signed ints within a loop and wondered if there is a better way to do it. There are too many conversions, and I am sure there is a more efficient method that I am missing.
Current Code:
string strData = "FFC000407F"; // example input data, encodes: -127, -64, 0, 64, 127
int v;
for (int x = 0; x < strData.Length/2; x++)
{
v = HexToInt(strData.Substring(x * 2, 2));
Console.WriteLine(v); // do stuff with v
}
private int HexToInt(string _hexData)
{
string strBinary = Convert.ToString(Convert.ToInt32(_hexData, 16), 2).PadLeft(_hexData.Length * 4, '0');
int i = Convert.ToInt32(strBinary.Substring(1, 7), 2);
i = (strBinary.Substring(0, 1) == "0" ? i : -i);
return i;
}
Question:
Is there a more streamlined and direct approach to reading two hex characters and converting them to an int when they represent a signed int (-127 to 127) using the leftmost bit as the sign?
Just convert it to an int and handle the sign bit by testing the size of the converted number and masking off the sign bit.
private int HexToInt(string _hexData)
{
int number = Convert.ToInt32(_hexData, 16);
if (number >= 0x80)
return -(number & 0x7F);
return number;
}
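For instance, checking it against the question's sample data:
string strData = "FFC000407F"; // -127, -64, 0, 64, 127
for (int x = 0; x < strData.Length / 2; x++)
    Console.WriteLine(HexToInt(strData.Substring(x * 2, 2))); // prints -127, -64, 0, 64, 127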
Like this: (Tested)
(int)unchecked((sbyte)Convert.ToByte("FF", 16))
Explanation:
The unchecked cast to sbyte performs a direct reinterpretation of the byte as a signed byte, treating the most significant bit as a two's-complement sign bit.
However, two's complement has a different range and interpretation than sign-and-magnitude (0xFF is -1 in two's complement, not -127), so on its own it won't give you the values you want. You first need to convert the sign-and-magnitude byte to its two's-complement equivalent:
sbyte SignAndMagnitudeToTwosComplement(byte b)
{
    // 1 if the sign bit is set, 0 otherwise
    var isNegative = (b & 0x80) >> 7;
    // For negative values, XOR with 0x7F flips the magnitude bits (keeping the sign bit)
    // and the +1 completes the two's-complement negation; non-negative values pass through unchanged.
    return (sbyte)((b ^ 0x7F * isNegative) + isNegative);
}
Then:
sbyte ReadSignAndMagnitudeByte(string hex)
{
return SignAndMagnitudeToTwosComplement(Convert.ToByte(hex,16));
}
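Applied to the sample string from the question, a minimal usage sketch:
string strData = "FFC000407F";
for (int x = 0; x < strData.Length; x += 2)
{
    sbyte v = ReadSignAndMagnitudeByte(strData.Substring(x, 2));
    Console.WriteLine(v); // -127, -64, 0, 64, 127
}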
Related
I need to read 8 bool values and create a byte from them. How is this done?
Rather than hardcoding the following 1s and 0s, how can I create that binary value from a series of Boolean values in C#?
byte myValue = 0b001_0000;
There are many ways of doing it; for example, you can build it from an array:
bool[] values = ...;
byte result = 0;
for (int i = values.Length - 1; i >= 0; --i) // assuming you store them "in reverse"
    if (values[i]) result |= (byte)(1 << (values.Length - 1 - i));
My solution with Linq:
public static byte CreateByte(bool[] bits)
{
if (bits.Length > 8)
{
throw new ArgumentOutOfRangeException();
}
return (byte)bits.Reverse().Select((val, i) => Convert.ToByte(val) << i).Sum();
}
The call to Reverse() is optional, depending on whether you want index 0 to be the LSB (without Reverse) or the MSB (with Reverse).
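A quick usage sketch (with Reverse, so bits[0] ends up as the most significant bit):
bool[] bits = { true, false, false, false, false, false, false, true };
byte b = CreateByte(bits); // 0b1000_0001 == 129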
var values = new bool[8];
values [7] = true;
byte result = 0;
for (var i = 0; i < 8; i++)
{
// edited to use bit shifting because of community complaints :D
if (values [i]) result |= (byte)(1 << i);
}
// result => 128
This might be absolutely overkill, but I felt like playing around with SIMD. It could've probably been written even better but I don't know SIMD all that well.
If you want the reverse bit order to what this generates, just remove the shuffling part from the SIMD approach and change (7 - i) to just i.
For those not familiar with SIMD, this approach is about 3 times faster than a normal for loop.
public static byte ByteFrom8Bools(ReadOnlySpan<bool> bools)
{
if (bools.Length < 8)
Throw();
static void Throw() // Throwing in a separate method helps JIT produce better code, or so I've heard
{
throw new ArgumentException("Not enough booleans provided");
}
// these are JIT compile time constants, only one of the branches will be compiled
// depending on the CPU running this code, eliminating the branch entirely
if(Sse2.IsSupported && Ssse3.IsSupported)
{
// copy out the 64 bits all at once
ref readonly bool b = ref bools[0];
ref bool refBool = ref Unsafe.AsRef(b);
ulong ulongBools = Unsafe.As<bool, ulong>(ref refBool);
// load our 64 bits into a vector register
Vector128<byte> vector = Vector128.CreateScalarUnsafe(ulongBools).AsByte();
// this is just to propagate the 1 set bit in true bools to the most significant bit
Vector128<byte> allTrue = Vector128.Create((byte)1);
Vector128<byte> compared = Sse2.CompareEqual(vector, allTrue);
// reverse the bytes we care about, leave the rest in their place
Vector128<byte> shuffleMask = Vector128.Create((byte)7, 6, 5, 4, 3, 2, 1, 0, 8, 9, 10, 11, 12, 13, 14, 15);
Vector128<byte> shuffled = Ssse3.Shuffle(compared, shuffleMask);
// move the most significant bit of each byte into a bit of int
int mask = Sse2.MoveMask(shuffled);
// returning byte = returning the least significant byte from int
return (byte)mask;
}
else
{
// fall back to a more generic algorithm if there aren't the correct instructions on the CPU
byte bits = 0;
for (int i = 0; i < 8; i++)
{
bool b = bools[i];
bits |= (byte)(Unsafe.As<bool, byte>(ref b) << (7 - i));
}
return bits;
}
}
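Usage is the same for both paths; a minimal sketch (here bools[0] becomes the most significant bit):
ReadOnlySpan<bool> bools = stackalloc bool[8] { true, false, false, false, false, false, false, true };
byte packed = ByteFrom8Bools(bools); // 0b1000_0001 == 129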
I'm looking to get the last 10 bits of a 32-bit integer.
Essentially what I did was take a number and jam it into the first 22 bits. I can extract that number back out quite nicely with:
int test = Convert.ToInt32(uIActivityId >> nMethodBits); //method bits == 10 so this just pushes the last ten bits off the range
Where test results in the number I put in in the first place.
But now I'm stumped. So my question to you guys is: how do I get the last ten bits of a 32-bit integer?
First create a mask that has a 1 bit for those bits you want and a 0 bit for those you're not interested in. Then use binary & to keep only the relevant bits.
const uint mask = (1 << 10) - 1; // 0x3FF
uint last10 = input & mask;
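For example, using the layout described in the question (a value packed above a 10-bit method field; the names and numbers are just illustrative):
uint packed = (1234u << 10) | 567u; // 22-bit value above a 10-bit value
uint upper22 = packed >> 10;        // 1234, the question's existing shift
uint last10  = packed & mask;       // 567, the low 10 bits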
Here is another try:
List<bool> BitsOfInt(int input, int bitCount)
{
    // BitArray indexes bits starting from the least significant bit
    List<bool> outArray = new BitArray(BitConverter.GetBytes(input)).OfType<bool>().ToList();
    return outArray.GetRange(0, bitCount); // the low bitCount bits
}
int i = Convert.ToInt32("11100111101", 2);
int mask = Convert.ToInt32("1111111111", 2);
int test = i & mask; // keep only the low 10 bits
int j = Convert.ToInt32("1100111101", 2);
if (test == j)
    System.Console.Out.WriteLine("it works");
public bool[] GetLast10Bits(int input) {
    // BitArray index 0 is the least significant bit of the int
    BitArray array = new BitArray(new[] { input });
    if (array.Length > 10) {
        return Enumerable.Range(0, 10).Select(i => array[i]).ToArray();
    }
    return new bool[0];
}
In Java, the BigDecimal class holds its value as A*pow(10,B), where A is a two's-complement integer of arbitrary bit length and B is a 32-bit integer.
In C#, Decimal holds its value as pow(-1,s) × c × pow(10,-e), where the sign s is 0 or 1, the coefficient c satisfies 0 ≤ c < pow(2,96), and the scale e satisfies 0 ≤ e ≤ 28.
I want to convert a Java BigDecimal to something like the C# Decimal, in Java.
Can you help me?
I have something like this:
class CS_likeDecimal
{
private int hi;  // the most significant 32 bits of c
private int mid; // the middle 32 bits of c
private int lo;  // the least significant 32 bits of c
.....
public CS_likeDecimal(BigDecimal data)
{
....
}
}
In fact I found this: What's the best way to represent System.Decimal in Protocol Buffers?.
It defines a protocol buffer message for sending a C# decimal, but the protobuf-net project uses it to send messages between C# endpoints (whereas I want to go between C# and Java):
message Decimal {
optional uint64 lo = 1; // the first 64 bits of the underlying value
optional uint32 hi = 2; // the last 32 bits of the underlying value
optional sint32 signScale = 3; // the number of decimal digits, and the sign
}
Thanks,
The Decimal I use in protobuf-net is primarily intended to support the likely usage of protobuf-net being used at both ends of the pipe, and it supports a fixed range. It sounds like the ranges of the two types in discussion are not the same, so they are not robustly compatible.
I would suggest explicitly using an alternative representation. I don't know what representations are available to Java's BigDecimal - whether there is a pragmatic byte[] version, or a string version.
If you are confident that the scale and range won't be a problem, then it should be possible to fudge between the two layouts with some bit-fiddling.
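For example, a minimal sketch of the string-based interchange on the C# side (assuming you add a plain string field to the message; Java's BigDecimal can produce and consume the same text via toPlainString() and its String constructor):
decimal value = 123.456m;
// write the value as an invariant-culture string, read it back the same way
string wire = value.ToString(System.Globalization.CultureInfo.InvariantCulture);
decimal roundTripped = decimal.Parse(wire, System.Globalization.CultureInfo.InvariantCulture);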
I needed to write a BigDecimal to/from .Net Decimal converter.
Using this reference:
http://msdn.microsoft.com/en-us/library/system.decimal.getbits.aspx
I wrote this code, which seems to work:
public static byte[] BigDecimalToNetDecimal(BigDecimal paramBigDecimal) throws IllegalArgumentException
{
// .Net Decimal target
byte[] result = new byte[16];
// Unscaled absolute value
BigInteger unscaledInt = paramBigDecimal.abs().unscaledValue();
int bitLength = unscaledInt.bitLength();
if (bitLength > 96)
throw new IllegalArgumentException("BigDecimal too big for .Net Decimal");
// Byte array
byte[] unscaledBytes = unscaledInt.toByteArray();
int unscaledFirst = 0;
if (unscaledBytes[0] == 0)
unscaledFirst = 1;
// Scale
int scale = paramBigDecimal.scale();
if (scale > 28)
throw new IllegalArgumentException("BigDecimal scale exceeds .Net Decimal limit of 28");
result[1] = (byte)scale;
// Copy unscaled value to bytes 8-15
for (int pSource = unscaledBytes.length - 1, pTarget = 15; (pSource >= unscaledFirst) && (pTarget >= 4); pSource--, pTarget--)
{
result[pTarget] = unscaledBytes[pSource];
}
// Signum at byte 0
if (paramBigDecimal.signum() < 0)
result[0] = -128;
return result;
}
public static BigDecimal NetDecimalToBigDecimal(byte[] paramNetDecimal)
{
int scale = paramNetDecimal[1];
int signum = paramNetDecimal[0] >= 0 ? 1 : -1;
byte[] magnitude = new byte[12];
for (int ptr = 0; ptr < 12; ptr++) magnitude[ptr] = paramNetDecimal[ptr + 4];
BigInteger unscaledInt = new BigInteger(signum, magnitude);
return new BigDecimal(unscaledInt, scale);
}
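On the C# side, the 16-byte layout produced above could be decoded back into a decimal along these lines (a sketch assuming exactly the byte positions used in the Java method: sign at byte 0, scale at byte 1, and the 96-bit unscaled value big-endian in bytes 4-15):
public static decimal NetDecimalBytesToDecimal(byte[] b)
{
    bool isNegative = b[0] != 0;
    byte scale = b[1];
    // bytes 4..15 hold the unscaled value, most significant byte first
    int hi  = (b[4]  << 24) | (b[5]  << 16) | (b[6]  << 8) | b[7];
    int mid = (b[8]  << 24) | (b[9]  << 16) | (b[10] << 8) | b[11];
    int lo  = (b[12] << 24) | (b[13] << 16) | (b[14] << 8) | b[15];
    return new decimal(lo, mid, hi, isNegative, scale);
}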
In C I would do this
int number = 3510;
char upper = number >> 8;
char lower = number && 8;
SendByte(upper);
SendByte(lower);
Where upper and lower would both = 54
In C# I am doing this:
int number = Convert.ToInt16("3510");
byte upper = byte(number >> 8);
byte lower = byte(number & 8);
char upperc = Convert.ToChar(upper);
char lowerc = Convert.ToChar(lower);
data = "GETDM" + upperc + lowerc;
comport.Write(data);
However, in the debugger number = 3510, upper = 13 and lower = 0.
This makes no sense; if I change the code to >> 6, upper = 54, which is absolutely strange.
Basically I just want to get the upper and lower byte from the 16-bit number and send it out the COM port after "GETDM".
How can I do this? It is so simple in C, but in C# I am completely stumped.
Your masking is incorrect - you should be masking against 255 (0xff) instead of 8. Shifting works in terms of "bits to shift by" whereas bitwise and/or work against the value to mask against... so if you want to only keep the bottom 8 bits, you need a mask which just has the bottom 8 bits set - i.e. 255.
Note that if you're trying to split a number into two bytes, it should really be a short or ushort to start with, not an int (which has four bytes).
ushort number = Convert.ToUInt16("3510");
byte upper = (byte) (number >> 8);
byte lower = (byte) (number & 0xff);
Note that I've used ushort here instead of byte as bitwise arithmetic is easier to think about when you don't need to worry about sign extension. It wouldn't actually matter in this case due to the way the narrowing conversion to byte works, but it's the kind of thing you should be thinking about.
You probably want to and it with 0x00FF
byte lower = Convert.ToByte(number & 0x00FF);
Full example:
ushort number = Convert.ToUInt16("3510");
byte upper = Convert.ToByte(number >> 8);
byte lower = Convert.ToByte(number & 0x00FF);
char upperc = Convert.ToChar(upper);
char lowerc = Convert.ToChar(lower);
data = "GETDM" + upperc + lowerc;
Even though the accepted answer fits the question, I consider it incomplete, for the simple reason that the question says int, not short, in its title, which is misleading in search results; as we know, Int32 in C# has 32 bits and thus 4 bytes. I will post an example here that is useful in the Int32 case. For an Int32 we have:
LowWordLowByte
LowWordHighByte
HighWordLowByte
HighWordHighByte.
As such, I have created the following method for converting an Int32 value into a little-endian hex string in which every byte is separated from the others by whitespace. This is useful when you transmit data and want the receiver to process it faster: it can just Split(" ") and get the bytes represented as standalone hex strings.
public static String IntToLittleEndianWhitespacedHexString(int pValue, uint pSize)
{
String result = String.Empty;
pSize = pSize < 4 ? pSize : 4;
byte tmpByte = 0x00;
for (int i = 0; i < pSize; i++)
{
tmpByte = (byte)((pValue >> i * 8) & 0xFF);
result += tmpByte.ToString("X2") + " ";
}
return result.TrimEnd(' ');
}
Usage:
String value1 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x927C, 4);
String value2 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x3FFFF, 4);
String value3 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x927C, 2);
String value4 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x3FFFF, 1);
The result is:
7C 92 00 00
FF FF 03 00
7C 92
FF.
If it is hard to understand the method which I created, then the following might be a more comprehensible one:
public static String IntToLittleEndianWhitespacedHexString(int pValue)
{
String result = String.Empty;
byte lowWordLowByte = (byte)(pValue & 0xFF);
byte lowWordHighByte = (byte)((pValue >> 8) & 0xFF);
byte highWordLowByte = (byte)((pValue >> 16) & 0xFF);
byte highWordHighByte = (byte)((pValue >> 24) & 0xFF);
result = lowWordLowByte.ToString("X2") + " " +
lowWordHighByte.ToString("X2") + " " +
highWordLowByte.ToString("X2") + " " +
highWordHighByte.ToString("X2");
return result;
}
Remarks:
Of course, instead of uint pSize there could be an enum specifying Byte, Word, or DoubleWord (a minimal sketch of that is shown below).
Instead of converting to a hex string and building the little-endian string, you can convert to chars and do whatever you need to do.
Hope this will help someone!
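A minimal sketch of the enum-based signature mentioned in the remarks (the names are just illustrative; it simply forwards to the uint overload above):
public enum IntWidth : uint { Byte = 1, Word = 2, DoubleWord = 4 }
public static String IntToLittleEndianWhitespacedHexString(int pValue, IntWidth pWidth)
{
    return IntToLittleEndianWhitespacedHexString(pValue, (uint)pWidth);
}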
Shouldn't it be:
byte lower = (byte) ( number & 0xFF );
To be a little more creative
[System.Runtime.InteropServices.StructLayout( System.Runtime.InteropServices.LayoutKind.Explicit )]
public struct IntToBytes {
[System.Runtime.InteropServices.FieldOffset(0)]
public int Int32;
[System.Runtime.InteropServices.FieldOffset(0)]
public byte First;
[System.Runtime.InteropServices.FieldOffset(1)]
public byte Second;
[System.Runtime.InteropServices.FieldOffset(2)]
public byte Third;
[System.Runtime.InteropServices.FieldOffset(3)]
public byte Fourth;
}
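For example (on little-endian hardware, First is the least significant byte):
var split = new IntToBytes { Int32 = 3510 }; // 0x00000DB6
byte lower = split.First;  // 0xB6
byte upper = split.Second; // 0x0D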
I was looking for a way to convert IEEE floating point numbers to IBM floating point format for an old system we are using.
Is there a general formula we can use in C# to this end?
Use:
// https://en.wikipedia.org/wiki/IBM_hexadecimal_floating-point
//
// float2ibm(-118.625F) == 0xC276A000
// 1 100 0010 0111 0110 1010 0000 0000 0000
//
// IBM/370 single precision, 4 bytes
// xxxx.xxxx xxxx.xxxx xxxx.xxxx xxxx.xxxx
// s|-exp--| |--------fraction-----------|
// (7) (24)
//
// value = (-1)**s * 16**(e - 64) * .f range = 5E-79 ... 7E+75
//
static int float2ibm(float fromFormat)
{
byte[] bytes = BitConverter.GetBytes(fromFormat);
int fconv = (bytes[3] << 24) | (bytes[2] << 16) | (bytes[1] << 8)| bytes[0];
if (fconv == 0)
return 0;
int fmant = (0x007fffff & fconv) | 0x00800000;
int t = (int)((0x7f800000 & fconv) >> 23) - 126;
while (0 != (t & 0x3)) {
++t;
fmant >>= 1;
}
fconv = (int)(0x80000000 & fconv) | (((t >> 2) + 64) << 24) | fmant;
return fconv; // Big-endian order
}
I adapted this from a piece of code called static void float_to_ibm(int from[], int to[], int n, int endian).
The code above runs correctly on a PC.
The input is a little-endian IEEE float.
The return value is a big-endian IBM float, but stored in an int.
An obvious approach would be to use textual representation of the number as the interchange format.
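A minimal sketch of that idea, using round-trippable invariant-culture formatting (the receiving system would parse the text itself):
float value = -118.625f;
string wire = value.ToString("R", System.Globalization.CultureInfo.InvariantCulture);
float roundTripped = float.Parse(wire, System.Globalization.CultureInfo.InvariantCulture);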
I recently had to convert one float to another. It looks like the XDR format uses an odd format for its floats. So when converting from XDR to standard floats, this code did it.
#include <rpc/rpc.h>
// Read in an XDR float array, copy to a standard float array.
// The 'out' array needs to be allocated before the function call.
bool convertFromXdrFloatArray(float *in, float *out, long size)
{
XDR xdrs;
xdrmem_create(&xdrs, (char *)in, size*sizeof(float), XDR_DECODE);
for(int i = 0; i < size; i++)
{
if(!xdr_float(&xdrs, out++)) {
fprintf(stderr, "%s:%d:ERROR:xdr_float\n", __FILE__, __LINE__);
exit(1);
}
}
xdr_destroy(&xdrs);
return true;
}
Using speeding's answer, I added the following that may be useful in some cases:
/// <summary>
/// Converts an IEEE floating number to its string representation (4 or 8 ASCII codes).
/// It is useful for SAS XPORT files format.
/// </summary>
/// <param name="from_">IEEE number</param>
/// <param name="padTo8_">When true, the output is 8 characters rather than 4</param>
/// <returns>Printable string according to the hardware's endianness</returns>
public static string Float2IbmAsAsciiCodes(float from_, bool padTo8_ = true)
{
StringBuilder sb = new StringBuilder();
string s;
byte[] bytes = BitConverter.GetBytes(Float2Ibm(from_)); // Big-endian order
if (BitConverter.IsLittleEndian)
{
// Revert bytes order
for (int i = 3; i > -1; i--)
sb.Append(Convert.ToChar(bytes[i]));
s = sb.ToString();
if (padTo8_)
s = s.PadRight(8, '\0');
return s;
}
else
{
for (int i = 0; i < 4; i++) // the array from GetBytes has only 4 bytes
sb.Append(Convert.ToChar(bytes[i]));
s = sb.ToString();
if (padTo8_)
s = s.PadRight(8, '\0');
return s;
}
}