I need to read 8 bool values and create a byte from them. How is this done?
Rather than hardcoding the following 1s and 0s, how can I create that binary value from a series of Boolean values in C#?
byte myValue = 0b001_0000;
There are many ways of doing it; for example, to build it from an array:
bool[] values = ...;
byte result = 0;
for(int i = values.Length - 1; i >= 0; --i) // assuming you store them "in reverse"
result |= (byte)(Convert.ToByte(values[i]) << (values.Length - 1 - i));
My solution with LINQ:
public static byte CreateByte(bool[] bits)
{
if (bits.Length > 8)
{
throw new ArgumentOutOfRangeException();
}
return (byte)bits.Reverse().Select((val, i) => Convert.ToByte(val) << i).Sum();
}
The call to Reverse() is optional and depends on whether you want index 0 to be the LSB (without Reverse) or the MSB (with Reverse).
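For example, a quick check of the MSB-first ordering (a usage sketch with the CreateByte method above):
bool[] bits = { true, false, false, false, false, false, false, false };
Console.WriteLine(CreateByte(bits)); // prints 128 with Reverse(); it would print 1 without it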
var values = new bool[8];
values [7] = true;
byte result = 0;
for (var i = 0; i < 8; i++)
{
//edited to bit shifting because of community complaints :D
if (values [i]) result |= (byte)(1 << i);
}
// result => 128
This might be absolutely overkill, but I felt like playing around with SIMD. It could've probably been written even better but I don't know SIMD all that well.
If you want the reverse bit order to what this generates, just remove the shuffling part from the SIMD approach and change (7 - i) to just i in the fallback loop.
For those not familiar with SIMD, this approach is about 3 times faster than a normal for loop.
// Requires: using System.Runtime.CompilerServices; using System.Runtime.Intrinsics; using System.Runtime.Intrinsics.X86;
public static byte ByteFrom8Bools(ReadOnlySpan<bool> bools)
{
if (bools.Length < 8)
Throw();
static void Throw() // Throwing in a separate method helps JIT produce better code, or so I've heard
{
throw new ArgumentException("Not enough booleans provided");
}
// these are JIT compile time constants, only one of the branches will be compiled
// depending on the CPU running this code, eliminating the branch entirely
if(Sse2.IsSupported && Ssse3.IsSupported)
{
// copy out the 64 bits all at once
ref readonly bool b = ref bools[0];
ref bool refBool = ref Unsafe.AsRef(b);
ulong ulongBools = Unsafe.As<bool, ulong>(ref refBool);
// load our 64 bits into a vector register
Vector128<byte> vector = Vector128.CreateScalarUnsafe(ulongBools).AsByte();
// this is just to propagate the 1 set bit in true bools to the most significant bit
Vector128<byte> allTrue = Vector128.Create((byte)1);
Vector128<byte> compared = Sse2.CompareEqual(vector, allTrue);
// reverse the bytes we care about, leave the rest in their place
Vector128<byte> shuffleMask = Vector128.Create((byte)7, 6, 5, 4, 3, 2, 1, 0, 8, 9, 10, 11, 12, 13, 14, 15);
Vector128<byte> shuffled = Ssse3.Shuffle(compared, shuffleMask);
// move the most significant bit of each byte into a bit of int
int mask = Sse2.MoveMask(shuffled);
// returning byte = returning the least significant byte from int
return (byte)mask;
}
else
{
// fall back to a more generic algorithm if there aren't the correct instructions on the CPU
byte bits = 0;
for (int i = 0; i < 8; i++)
{
bool b = bools[i];
bits |= (byte)(Unsafe.As<bool, byte>(ref b) << (7 - i));
}
return bits;
}
}
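A quick usage sketch (assuming the method above is in scope; the flag values are made up):
bool[] flags = { true, false, false, false, false, false, false, true };
byte packed = ByteFrom8Bools(flags);
Console.WriteLine(packed); // index 0 becomes the most significant bit: 0b1000_0001 = 129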
I want to create a custom data type which is 4 bits (nibble).
One option is this -
byte source = 0xAD;
var hiNybble = (source & 0xF0) >> 4; //Left hand nybble = A
var loNybble = (source & 0x0F); //Right hand nybble = D
However, I want to store each 4 bits in an array.
So for example, a 2 byte array,
00001111 01010000
would be stored in the custom data type array as 4 nibbles -
0000
1111
0101
0000
Essentially I want to operate on 4 bit types.
Is there any way I can convert the array of bytes into array of nibbles?
Appreciate an example.
Thanks.
You can encapsulate a stream returning 4-bit samples by reading bytes and then converting (written from a phone without a compiler to test, so expect typos and off-by-one errors):
public static int ReadNibbles(this Stream s, byte[] data, int offset, int count)
{
if (s == null)
{
throw new ArgumentNullException(nameof(s));
}
if (data == null)
{
throw new ArgumentNullException(nameof(data));
}
if (data.Length < offset + count)
{
throw new ArgumentOutOfRangeException(nameof(count));
}
var readBytes = s.Read(data, offset, count / 2);
// expand in place from the end so source bytes are not overwritten before they are read
for (int n = readBytes * 2 - 1, k = readBytes - 1; k >= 0; k--)
{
data[offset + n--] = (byte)(data[offset + k] & 0xf);
data[offset + n--] = (byte)(data[offset + k] >> 4);
}
return readBytes * 2;
}
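A usage sketch matching the example bytes from the question (assuming the extension method above lives in a static class):
using var ms = new MemoryStream(new byte[] { 0x0F, 0x50 }); // 00001111 01010000
var nibbles = new byte[4];
int got = ms.ReadNibbles(nibbles, 0, nibbles.Length);
// got == 4, nibbles == { 0x0, 0xF, 0x5, 0x0 }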
To do the same for 12-bit integers (assuming MSB nibble ordering):
public static int Read(this Stream stream, ushort[] data, int offset, int length)
{
if (stream == null)
{
throw new ArgumentNullException(nameof(stream));
}
if (data == null)
{
throw new ArgumentNullException(nameof(data));
}
if (data.Length < offset + length)
{
throw new ArgumentOutOfRangeException(nameof(length));
}
if (length < 2)
{
throw new ArgumentOutOfRangeException(nameof(length), "Cannot read fewer than two samples at a time");
}
// we always need a multiple of two
length -= length % 2;
//  3 bytes    length samples
// --------- * -------------- = N bytes
// 2 samples         1
int rawCount = (length / 2) * 3;
// This will place GC load. Something like a buffer pool or keeping
// the allocation as a field on the reader would be a good idea.
var rawData = new byte[rawCount];
int readBytes = 0;
// the underlying stream may return fewer bytes than requested, so keep reading until we have them all
while (readBytes < rawCount)
{
int c = stream.Read(rawData, readBytes, rawCount - readBytes);
if (c <= 0)
{
// End of stream
break;
}
readBytes += c;
}
// unpack
int k = 0;
for (int i = 0; i < readBytes; i += 3)
{
// EOF in second byte is undefined
if (i + 1 >= readBytes)
{
throw new InvalidOperationException("Unexpected EOF");
}
data[(k++) + offset] = (ushort)((rawData[i + 0] << 4) | (rawData[i + 1] >> 4));
// EOF in third byte == only one sample
if (i + 2 < readBytes)
{
data[(k++) + offset] = (ushort)(((rawData[i + 1] & 0xf) << 8) | rawData[i + 2]);
}
}
return k;
}
The best way to do this would be to look at the source for one of the existing integral data types. For example Int16.
If you look a that type, you can see that it implements a handful of interfaces:
[Serializable]
public struct Int16 : IComparable, IFormattable, IConvertible, IComparable<short>, IEquatable<short> { /* ... */ }
The implementation of the type isn't very complicated. It has a MaxValue and a MinValue, a couple of CompareTo overloads, a couple of Equals overloads, the System.Object overrides (GetHashCode, GetType, ToString plus some overloads), a handful of Parse and TryParse overloads, and a range of IConvertible implementations.
In other places, you can find things like arithmetic, comparison and conversion operators.
BUT:
What System.Int16 has that you can't have is this:
internal short m_value;
That's a native type (16-bit integer) member that holds the value. There is no 4-bit native type. The best you are going to be able to do is have a native byte in your implementation that will hold the value. You can write accessors that constrain it to the lower 4 bits, but you can't do much more than that. If someone creates a Nibble array, it will be implemented as an array of those values. As far as I know, there's no way to inject your implementation into that array. Similarly, if someone creates some other collection (e.g., List<Nibble>), then the collection will be of instances of your type, each of which will take up 8 bits.
However
You can create specialized collection classes, NibbleArray, NibbleList, etc. C#'s syntax allows you to provide your own collection initialization implementation for a collection, your own indexing method, etc.
So, if someone does something like this:
var nyblArray = new NibbleArray(32);
nyblArray[4] = 0xd;
Then your code can, under the covers, create a 16-element byte array, set the low nibble of the third byte to 0xd.
Similarly, you can implement code to allow:
var newArray = new NibbleArray { 0x1, 0x3, 0x5, 0x7, 0x9, 0xa};
or
var nyblList = new NibbleList { 0x0, 0x2, 0xe};
A normal array will waste space, but your specialized collection classes will do what you are talking about (with the expense of some bit-twizzling).
The closest you can get to what you want is to use an indexer:
// Indexer declaration
public int this[int index]
{
// get and set accessors
}
Within the body of the indexer you can translate the index to the actual byte that contains your 4 bits.
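As a rough sketch of what that might look like (NibbleArray and its packing convention here are hypothetical, not an existing type; even indices are stored in the low nibble):
public class NibbleArray
{
    private readonly byte[] _data;

    public NibbleArray(int length)
    {
        Length = length;
        _data = new byte[(length + 1) / 2]; // two nibbles per byte
    }

    public int Length { get; }

    public int this[int index]
    {
        get
        {
            byte b = _data[index / 2];
            return (index % 2 == 0) ? b & 0x0F : b >> 4;
        }
        set
        {
            int i = index / 2;
            if (index % 2 == 0)
                _data[i] = (byte)((_data[i] & 0xF0) | (value & 0x0F));
            else
                _data[i] = (byte)((_data[i] & 0x0F) | ((value & 0x0F) << 4));
        }
    }
}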
The next thing you can do is operator overloading. You can redefine +, -, *...
I'm looking to get the last 10 bits of a 32-bit integer.
Essentially what I did was take a number and jam it into the first 22 bits. I can extract that number back out quite nicely with
int test = Convert.ToInt32(uIActivityId >> nMethodBits); //method bits == 10 so this just pushes the last ten bits off the range
Where test results in the number I put in in the first place.
But now I'm stumped. So my question to you guys is: how do I get the last ten bits of a 32-bit integer?
First create a mask that has a 1 bit for those bits you want and a 0 bit for those you're not interested in. Then use binary & to keep only the relevant bits.
const uint mask = (1 << 10) - 1; // 0x3FF
uint last10 = input & mask;
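For example, with the packed value from the question (uIActivityId and nMethodBits are the question's names; the actual numbers here are made up):
const int nMethodBits = 10;
uint uIActivityId = (22u << nMethodBits) | 0x155; // 22 packed above ten low bits holding 0x155

uint upper22 = uIActivityId >> nMethodBits;              // 22
uint last10 = uIActivityId & ((1u << nMethodBits) - 1);  // 0x155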
This is another try, using BitArray and LINQ:
List<bool> BitsOfInt(int input, int bitCount)
{
List<bool> outArray = new BitArray(BitConverter.GetBytes(input)).OfType<bool>().ToList(); // LSB first
return outArray.GetRange(0, bitCount); // the first bitCount entries are the low bits
}
int i = Convert.ToInt32("11100111101", 2);
int mask = Convert.ToInt32("1111111111", 2);
int test = i & mask;
int j = Convert.ToInt32("1100111101", 2);
if (test == j)
System.Console.Out.WriteLine("it works");
public bool[] GetLast10Bits(int input) {
// BitArray stores the least significant bit at index 0
BitArray array = new BitArray(new[] { input });
return Enumerable.Range(0, 10).Select(i => array[i]).ToArray();
}
Consider the following code:
static void Main(string[] args)
{
int max = 1024;
var lst = new List<int>();
for (int i = 1; i <= max; i *= 2) { lst.Add(i); }
var arr = lst.ToArray();
IterateInt(arr);
Console.WriteLine();
IterateShort(arr);
Console.WriteLine();
IterateLong(arr);
}
static void IterateInt(int[] arr)
{
Console.WriteLine("Iterating as INT ({0})", sizeof(int));
Console.WriteLine();
unsafe
{
fixed (int* src = arr)
{
var ptr = (int*)src;
var len = arr.Length;
while (len > 0)
{
Console.WriteLine(*ptr);
ptr++;
len--;
}
}
}
}
static void IterateShort(int[] arr)
{
Console.WriteLine("Iterating as SHORT ({0})", sizeof(short));
Console.WriteLine();
unsafe
{
fixed (int* src = arr)
{
var ptr = (short*)src;
var len = arr.Length;
while (len > 0)
{
Console.WriteLine(*ptr);
ptr++;
len--;
}
}
}
}
static void IterateLong(int[] arr)
{
Console.WriteLine("Iterating as LONG ({0})", sizeof(long));
Console.WriteLine();
unsafe
{
fixed (int* src = arr)
{
var ptr = (long*)src;
var len = arr.Length;
while (len > 0)
{
Console.WriteLine(*ptr);
ptr++;
len--;
}
}
}
}
Now, by no means do I have a full understanding in this arena. Nor did I have any real expectations. I'm experimenting and trying to learn. However, based off what I've read thus far, I don't understand the results I got for short and long.
It is my understanding that the original int[], when read 1 location at a time (i.e. arr + i), reads 4 bytes at a time because of the data types size and thus the value *ptr is of course the integral value.
However, with short I don't quite understand why every even iteration is 0 (or arguably odd iteration depending on your root reference). I mean I can see the pattern. Every time I iterate 4 bytes I get the real integral value in memory (just like iterating the int*), but why 0 on every other result?
Then the long iteration is even further outside my understanding; I don't even know what to say or assume there.
Results
Iterating as INT (4)
1
2
4
8
16
32
64
128
256
512
1024
Iterating as SHORT (2)
1
0
2
0
4
0
8
0
16
0
32
Iterating as LONG (8)
8589934593
34359738372
137438953488
549755813952
2199023255808
-9223372036854774784
96276819136
32088581144313929
30962698417340513
32370038935650407
23644233055928352
What is actually happening with the short and long iterations?
When you say pointer[index] it gives you sizeof(type) bytes at location pointer + index * sizeof(type). So by changing the type that you "iterate with" you change the stride.
With short you read halves of the original ints. Small ints have all zeros in their upper half.
With long you read two ints at the same time, forced into one long. At ptr[0] you are reading, for example, (2L << 32 | 1L), since on a little-endian machine the first int lands in the low half; that is a big number.
You are still using the original Length measured in int-units, though, which is a bug. In the long-case you are reading outside the bounds of the array, in the short case you are reading too little.
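A minimal fix for the long case might look like this (a sketch based on the IterateLong method above; the element count is recomputed in terms of the target type):
static void IterateLongFixed(int[] arr)
{
    unsafe
    {
        fixed (int* src = arr)
        {
            var ptr = (long*)src;
            // arr.Length counts ints; convert it to a count of longs so we stay inside the array
            var len = arr.Length * sizeof(int) / sizeof(long);
            while (len > 0)
            {
                Console.WriteLine(*ptr);
                ptr++;
                len--;
            }
        }
    }
}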
In Java, the BigDecimal class stores values as A × pow(10, -B), where A is a two's complement integer with a non-fixed bit length and B is a 32-bit integer scale.
In C#, Decimal stores values as pow(-1, s) × c × pow(10, -e), where the sign s is 0 or 1, the coefficient c satisfies 0 ≤ c < pow(2, 96), and the scale e satisfies 0 ≤ e ≤ 28.
I want to convert a Java BigDecimal to something like a C# Decimal, in Java.
Can you help me?
I have something like this:
class CS_likeDecimal
{
private int hi; // the most significant 32 bits of c
private int mid; // the middle 32 bits
private int lo; // the least significant 32 bits
.....
public CS_likeDecimal(BigDecimal data)
{
....
}
}
In fact I found this: What's the best way to represent System.Decimal in Protocol Buffers?
It is a protocol buffer message for sending a C# decimal, but the protobuf-net project uses it to send messages between C# endpoints (whereas I want to exchange values between C# and Java):
message Decimal {
optional uint64 lo = 1; // the first 64 bits of the underlying value
optional uint32 hi = 2; // the last 32 bits of the underlying value
optional sint32 signScale = 3; // the number of decimal digits, and the sign
}
Thanks,
The Decimal I use in protobuf-net is primarily intended to support the likely usage of protobuf-net being used at both ends of the pipe, which supports a fixed range. It sounds like the ranges of the two types in discussion are not the same, so they are not robustly compatible.
I would suggest explicitly using an alternative representation. I don't know what representations are available to Java's BigDecimal - whether there is a pragmatic byte[] version, or a string version.
If you are confident that the scale and range won't be a problem, then it should be possible to fudge between the two layouts with some bit-fiddling.
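For example, a string-based interchange on the C# side might look like this (a sketch, assuming the values fit in both types' ranges; the Java side would use BigDecimal.toPlainString() and new BigDecimal(text)):
using System.Globalization;

// Serialize: send this string across the wire instead of the raw Decimal layout
static string DecimalToWireString(decimal value) =>
    value.ToString(CultureInfo.InvariantCulture);

// Deserialize: parse the string produced by Java's BigDecimal.toPlainString()
static decimal WireStringToDecimal(string text) =>
    decimal.Parse(text, NumberStyles.Number, CultureInfo.InvariantCulture);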
I needed to write a BigDecimal to/from .Net Decimal converter.
Using this reference:
http://msdn.microsoft.com/en-us/library/system.decimal.getbits.aspx
I wrote this code, which should work:
public static byte[] BigDecimalToNetDecimal(BigDecimal paramBigDecimal) throws IllegalArgumentException
{
// .Net Decimal target
byte[] result = new byte[16];
// Unscaled absolute value
BigInteger unscaledInt = paramBigDecimal.abs().unscaledValue();
int bitLength = unscaledInt.bitLength();
if (bitLength > 96)
throw new IllegalArgumentException("BigDecimal too big for .Net Decimal");
// Byte array
byte[] unscaledBytes = unscaledInt.toByteArray();
int unscaledFirst = 0;
if (unscaledBytes[0] == 0)
unscaledFirst = 1;
// Scale
int scale = paramBigDecimal.scale();
if (scale > 28)
throw new IllegalArgumentException("BigDecimal scale exceeds .Net Decimal limit of 28");
result[1] = (byte)scale;
// Copy unscaled value to bytes 8-15
for (int pSource = unscaledBytes.length - 1, pTarget = 15; (pSource >= unscaledFirst) && (pTarget >= 4); pSource--, pTarget--)
{
result[pTarget] = unscaledBytes[pSource];
}
// Signum at byte 0
if (paramBigDecimal.signum() < 0)
result[0] = -128;
return result;
}
public static BigDecimal NetDecimalToBigDecimal(byte[] paramNetDecimal)
{
int scale = paramNetDecimal[1];
int signum = paramNetDecimal[0] >= 0 ? 1 : -1;
byte[] magnitude = new byte[12];
for (int ptr = 0; ptr < 12; ptr++) magnitude[ptr] = paramNetDecimal[ptr + 4];
BigInteger unscaledInt = new BigInteger(signum, magnitude);
return new BigDecimal(unscaledInt, scale);
}
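On the C# side, a matching decoder for that 16-byte layout might look like this (a sketch based on the layout above: sign in byte 0, scale in byte 1, 96-bit magnitude big-endian in bytes 4-15):
static decimal NetDecimalBytesToDecimal(byte[] raw)
{
    bool isNegative = raw[0] != 0;
    byte scale = raw[1];

    // bytes 4..15 hold the magnitude big-endian; split it into hi/mid/lo 32-bit chunks
    int hi = (raw[4] << 24) | (raw[5] << 16) | (raw[6] << 8) | raw[7];
    int mid = (raw[8] << 24) | (raw[9] << 16) | (raw[10] << 8) | raw[11];
    int lo = (raw[12] << 24) | (raw[13] << 16) | (raw[14] << 8) | raw[15];

    return new decimal(lo, mid, hi, isNegative, scale);
}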
Scenario:
I have a string of hexadecimal characters which encode 8-bit signed integers. Each two characters represent a byte that employs the leftmost (MSB) bit as the sign (rather than two's complement). I am converting these to signed ints within a loop and wondered if there's a better way to do it. There are too many conversions and I am sure there's a more efficient method that I am missing.
Current Code:
string strData = "FFC000407F"; // example input data, encodes: -127, -64, 0, 64, 127
int v;
for (int x = 0; x < strData.Length/2; x++)
{
v = HexToInt(strData.Substring(x * 2, 2));
Console.WriteLine(v); // do stuff with v
}
private int HexToInt(string _hexData)
{
string strBinary = Convert.ToString(Convert.ToInt32(_hexData, 16), 2).PadLeft(_hexData.Length * 4, '0');
int i = Convert.ToInt32(strBinary.Substring(1, 7), 2);
i = (strBinary.Substring(0, 1) == "0" ? i : -i);
return i;
}
Question:
Is there a more streamlined and direct approach to reading two hex characters and converting them to an int when they represent a signed int (-127 to 127) using the leftmost bit as the sign?
Just convert it to an int and handle the sign bit by testing the size of the converted number and masking off the sign bit.
private int HexToInt(string _hexData)
{
int number = Convert.ToInt32(_hexData, 16);
if (number >= 0x80)
return -(number & 0x7F);
return number;
}
Like this: (Tested)
(int)unchecked((sbyte)Convert.ToByte("FF", 16))
Explanation:
The unchecked cast to sbyte performs a direct reinterpretation of the bits as a signed (two's complement) byte, treating the top bit as the sign bit.
However, two's complement is a different encoding with a different range than sign-and-magnitude, so on its own it won't give you the values you want.
sbyte SignAndMagnitudeToTwosComplement(byte b)
{
// 1 if the sign bit is set, 0 otherwise
var isNegative = (b & 0x80) >> 7;
// for negative values, flip the low 7 bits and add 1, which negates the magnitude in two's complement
return (sbyte)((b ^ (0x7F * isNegative)) + isNegative);
}
Then:
sbyte ReadSignAndMagnitudeByte(string hex)
{
return SignAndMagnitudeToTwosComplement(Convert.ToByte(hex,16));
}
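Applied to the sample string from the question, usage might look like this (a sketch assuming the two helpers above):
string strData = "FFC000407F";
for (int x = 0; x < strData.Length / 2; x++)
{
    sbyte v = ReadSignAndMagnitudeByte(strData.Substring(x * 2, 2));
    Console.WriteLine(v); // -127, -64, 0, 64, 127
}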