I'm looking to get the last 10 bits of a 32-bit integer.
Essentially, I took a number and packed it into the first 22 bits. I can extract that number back out quite nicely with
int test = Convert.ToInt32(uIActivityId >> nMethodBits); //method bits == 10 so this just pushes the last ten bits off the range
where test is the number I put in in the first place.
But now I'm stumped: how do I get the last ten bits of a 32-bit integer?
First create a mask that has a 1 bit for the bits you want and a 0 bit for those you're not interested in. Then use the bitwise & operator to keep only the relevant bits.
const uint mask = (1 << 10) - 1; // 0x3FF
uint last10 = input & mask;
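For example, here is a small round-trip sketch of the layout described in the question (the values and most names are made up for illustration; only nMethodBits and uIActivityId come from the original post). The upper 22 bits hold one value and the lower 10 bits another:
const int nMethodBits = 10;
uint methodId = 0x2A;          // must fit in 10 bits (made-up value)
uint activityNumber = 0x12345; // must fit in 22 bits (made-up value)
uint uIActivityId = (activityNumber << nMethodBits) | methodId;

uint upper22 = uIActivityId >> nMethodBits;               // 0x12345 again
uint lower10 = uIActivityId & ((1u << nMethodBits) - 1);  // 0x2A again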
Here is another approach. BitArray enumerates bits least-significant first, so the low bits sit at the start of the list:
List<bool> BitsOfInt(int input, int bitCount)
{
    List<bool> outArray = new BitArray(BitConverter.GetBytes(input)).OfType<bool>().ToList();
    return outArray.GetRange(0, bitCount);
}
int i = Convert.ToInt32("11100111101", 2);
int mask = Convert.ToInt32("1111111111", 2);
int test = i & mask;
int j = Convert.ToInt32("1100111101", 2);
if (test == j)
System.Console.Out.WriteLine("it works");
public bool[] GetLast10Bits(int input)
{
    // BitArray indexes bits least-significant first, so indices 0-9 are the low ten bits.
    BitArray array = new BitArray(new[] { input });
    return Enumerable.Range(0, 10).Select(i => array[i]).ToArray();
}
I need to read 8 bool values and create a byte from them. How is this done?
Rather than hardcoding the 1s and 0s as below, how can I create that binary value from a series of Boolean values in C#?
byte myValue = 0b001_0000;
There are many ways of doing it, for example building it up from an array:
bool[] values = ...;
byte result = 0;
for (int i = values.Length - 1; i >= 0; --i) // assuming you store them "in reverse"
    if (values[i])
        result |= (byte)(1 << (values.Length - 1 - i));
My solution with LINQ:
public static byte CreateByte(bool[] bits)
{
if (bits.Length > 8)
{
throw new ArgumentOutOfRangeException();
}
return (byte)bits.Reverse().Select((val, i) => Convert.ToByte(val) << i).Sum();
}
The call to Reverse() is optional, depending on whether you want index 0 to be the LSB (without Reverse) or the MSB (with Reverse).
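A quick usage sketch (not part of the original answer) showing that with Reverse() in place, index 0 ends up as the most significant bit:
bool[] flags = { true, false, false, false, false, false, false, false };
byte b = CreateByte(flags); // 128, because index 0 became the MSB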
var values = new bool[8];
values [7] = true;
byte result = 0;
for (var i = 0; i < 8; i++)
{
// edited to use bit shifting because of community complaints :D
if (values [i]) result |= (byte)(1 << i);
}
// result => 128
This might be absolutely overkill, but I felt like playing around with SIMD. It could've probably been written even better but I don't know SIMD all that well.
If you want the reverse bit order from what this generates, just remove the shuffling part from the SIMD approach and change (7 - i) to just i.
For those not familiar with SIMD, this approach is about 3 times faster than a normal for loop.
public static byte ByteFrom8Bools(ReadOnlySpan<bool> bools)
{
if (bools.Length < 8)
Throw();
static void Throw() // Throwing in a separate method helps JIT produce better code, or so I've heard
{
throw new ArgumentException("Not enough booleans provided");
}
// these are JIT compile time constants, only one of the branches will be compiled
// depending on the CPU running this code, eliminating the branch entirely
if(Sse2.IsSupported && Ssse3.IsSupported)
{
// copy out the 64 bits all at once
ref readonly bool b = ref bools[0];
ref bool refBool = ref Unsafe.AsRef(b);
ulong ulongBools = Unsafe.As<bool, ulong>(ref refBool);
// load our 64 bits into a vector register
Vector128<byte> vector = Vector128.CreateScalarUnsafe(ulongBools).AsByte();
// this is just to propagate the 1 set bit in true bools to the most significant bit
Vector128<byte> allTrue = Vector128.Create((byte)1);
Vector128<byte> compared = Sse2.CompareEqual(vector, allTrue);
// reverse the bytes we care about, leave the rest in their place
Vector128<byte> shuffleMask = Vector128.Create((byte)7, 6, 5, 4, 3, 2, 1, 0, 8, 9, 10, 11, 12, 13, 14, 15);
Vector128<byte> shuffled = Ssse3.Shuffle(compared, shuffleMask);
// move the most significant bit of each byte into a bit of int
int mask = Sse2.MoveMask(shuffled);
// returning byte = returning the least significant byte from int
return (byte)mask;
}
else
{
// fall back to a more generic algorithm if there aren't the correct instructions on the CPU
byte bits = 0;
for (int i = 0; i < 8; i++)
{
bool b = bools[i];
bits |= (byte)(Unsafe.As<bool, byte>(ref b) << (7 - i));
}
return bits;
}
}
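A quick usage sketch (my own addition, assuming ByteFrom8Bools is in scope). bools[0] maps to the most significant bit, matching the (7 - i) shift in the fallback path:
ReadOnlySpan<bool> flags = new[] { true, false, false, false, false, false, false, true };
byte packed = ByteFrom8Bools(flags);
Console.WriteLine(Convert.ToString(packed, 2).PadLeft(8, '0')); // 10000001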
In Java, the BigDecimal class stores values as A * pow(10, B), where A is a two's-complement integer of arbitrary bit length and B is a 32-bit integer.
In C#, Decimal stores values as pow(-1, s) * c * pow(10, -e), where the sign s is 0 or 1, the coefficient c satisfies 0 ≤ c < pow(2, 96), and the scale e satisfies 0 ≤ e ≤ 28.
I want to convert a Java BigDecimal to something like the C# Decimal, in Java.
Can you help me?
I have something like this:
class CS_likeDecimal
{
private int hi;  // the most significant 32 bits of c
private int mid; // the middle 32 bits of c
private int lo;  // the least significant 32 bits of c
.....
public CS_likeDecimal(BigDecimal data)
{
....
}
}
In fact, I found this: What's the best way to represent System.Decimal in Protocol Buffers?
It's a protocol buffer message for sending a C# decimal, but the protobuf-net project uses it to send messages between C# endpoints (and I want to go between C# and Java):
message Decimal {
optional uint64 lo = 1; // the first 64 bits of the underlying value
optional uint32 hi = 2; // the last 32 bits of the underlying value
optional sint32 signScale = 3; // the number of decimal digits, and the sign
}
Thanks,
The Decimal I use in protobuf-net is primarily intended to support the likely usage of protobuf-net being used at both ends of the pipe, which supports a fixed range. It sounds like the range of the two types in discussion is not the same, so they are not robustly compatible.
I would suggest explicitly using an alternative representation. I don't know what representations are available to Java's BigDecimal - whether there is a pragmatic byte[] version, or a string version.
If you are confident that the scale and range won't be a problem, then it should be possible to fudge between the two layouts with some bit-fiddling.
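For example, a minimal sketch of the string route on the C# side, assuming the Java end sends BigDecimal.toPlainString() over the wire (the class and method names here are made up):
using System.Globalization;

static class DecimalWire
{
    public static string ToWire(decimal value) =>
        value.ToString(CultureInfo.InvariantCulture);

    public static decimal FromWire(string text) =>
        decimal.Parse(text, NumberStyles.Number, CultureInfo.InvariantCulture);
}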
I needed to write a BigDecimal to/from .Net Decimal converter.
Using this reference:
http://msdn.microsoft.com/en-us/library/system.decimal.getbits.aspx
I wrote this code, which seems to work:
public static byte[] BigDecimalToNetDecimal(BigDecimal paramBigDecimal) throws IllegalArgumentException
{
// .Net Decimal target
byte[] result = new byte[16];
// Unscaled absolute value
BigInteger unscaledInt = paramBigDecimal.abs().unscaledValue();
int bitLength = unscaledInt.bitLength();
if (bitLength > 96)
throw new IllegalArgumentException("BigDecimal too big for .Net Decimal");
// Byte array
byte[] unscaledBytes = unscaledInt.toByteArray();
int unscaledFirst = 0;
if (unscaledBytes[0] == 0)
unscaledFirst = 1;
// Scale
int scale = paramBigDecimal.scale();
if (scale > 28)
throw new IllegalArgumentException("BigDecimal scale exceeds .Net Decimal limit of 28");
result[1] = (byte)scale;
// Copy unscaled value to bytes 4-15
for (int pSource = unscaledBytes.length - 1, pTarget = 15; (pSource >= unscaledFirst) && (pTarget >= 4); pSource--, pTarget--)
{
result[pTarget] = unscaledBytes[pSource];
}
// Signum at byte 0
if (paramBigDecimal.signum() < 0)
result[0] = -128;
return result;
}
public static BigDecimal NetDecimalToBigDecimal(byte[] paramNetDecimal)
{
int scale = paramNetDecimal[1];
int signum = paramNetDecimal[0] >= 0 ? 1 : -1;
byte[] magnitude = new byte[12];
for (int ptr = 0; ptr < 12; ptr++) magnitude[ptr] = paramNetDecimal[ptr + 4];
BigInteger unscaledInt = new BigInteger(signum, magnitude);
return new BigDecimal(unscaledInt, scale);
}
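For completeness, here is a minimal sketch of the C# side that rebuilds a System.Decimal from the 16-byte layout produced by BigDecimalToNetDecimal above (my own helper, not part of the original answer). Byte 0 carries the sign, byte 1 the scale, and bytes 4-15 the 96-bit unscaled value in big-endian order:
public static decimal NetDecimalFromBytes(byte[] buffer)
{
    bool isNegative = buffer[0] != 0;
    byte scale = buffer[1];
    int hi  = (buffer[4]  << 24) | (buffer[5]  << 16) | (buffer[6]  << 8) | buffer[7];
    int mid = (buffer[8]  << 24) | (buffer[9]  << 16) | (buffer[10] << 8) | buffer[11];
    int lo  = (buffer[12] << 24) | (buffer[13] << 16) | (buffer[14] << 8) | buffer[15];
    return new decimal(lo, mid, hi, isNegative, scale);
}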
In C I would do this
int number = 3510;
char upper = number >> 8;
char lower = number && 8;
SendByte(upper);
SendByte(lower);
Where upper and lower would both = 54
In C# I am doing this:
int number = Convert.ToInt16("3510");
byte upper = (byte)(number >> 8);
byte lower = (byte)(number & 8);
char upperc = Convert.ToChar(upper);
char lowerc = Convert.ToChar(lower);
data = "GETDM" + upperc + lowerc;
comport.Write(data);
However in the debugger number = 3510, upper = 13 and lower = 0
This makes no sense; if I change the code to >> 6, upper = 54, which is absolutely strange.
Basically I just want to get the upper and lower byte from the 16 bit number, and send it out the com port after "GETDM"
How can I do this? It is so simple in C, but in C# I am completely stumped.
Your masking is incorrect - you should be masking against 255 (0xff) instead of 8. Shifting works in terms of "bits to shift by" whereas bitwise and/or work against the value to mask against... so if you want to only keep the bottom 8 bits, you need a mask which just has the bottom 8 bits set - i.e. 255.
Note that if you're trying to split a number into two bytes, it should really be a short or ushort to start with, not an int (which has four bytes).
ushort number = Convert.ToUInt16("3510");
byte upper = (byte) (number >> 8);
byte lower = (byte) (number & 0xff);
Note that I've used ushort here instead of byte as bitwise arithmetic is easier to think about when you don't need to worry about sign extension. It wouldn't actually matter in this case due to the way the narrowing conversion to byte works, but it's the kind of thing you should be thinking about.
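As a quick sanity check (not part of the original answer), recombining the two bytes reproduces the original value:
ushort roundTrip = (ushort)((upper << 8) | lower); // 3510 again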
You probably want to AND it with 0x00FF:
byte lower = Convert.ToByte(number & 0x00FF);
Full example:
ushort number = Convert.ToUInt16("3510");
byte upper = Convert.ToByte(number >> 8);
byte lower = Convert.ToByte(number & 0x00FF);
char upperc = Convert.ToChar(upper);
char lowerc = Convert.ToChar(lower);
data = "GETDM" + upperc + lowerc;
Even though the accepted answer fits the question, I consider it incomplete: the question's title says int, not short, which is misleading in search results, and as we know Int32 in C# has 32 bits and thus 4 bytes. I will post an example here that is useful when working with an Int32. For an Int32 we have:
LowWordLowByte
LowWordHighByte
HighWordLowByte
HighWordHighByte.
As such, I have created the following method for converting an Int32 value into a little-endian hex string in which each byte is separated from the others by whitespace. This is useful when you transmit data and want the receiver to process it faster: it can just Split(" ") and get the bytes as standalone hex strings.
public static String IntToLittleEndianWhitespacedHexString(int pValue, uint pSize)
{
String result = String.Empty;
pSize = pSize < 4 ? pSize : 4;
byte tmpByte = 0x00;
for (int i = 0; i < pSize; i++)
{
tmpByte = (byte)((pValue >> i * 8) & 0xFF);
result += tmpByte.ToString("X2") + " ";
}
return result.TrimEnd(' ');
}
Usage:
String value1 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x927C, 4);
String value2 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x3FFFF, 4);
String value3 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x927C, 2);
String value4 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x3FFFF, 1);
The result is:
7C 92 00 00
FF FF 03 00
7C 92
FF.
If it is hard to understand the method which I created, then the following might be a more comprehensible one:
public static String IntToLittleEndianWhitespacedHexString(int pValue)
{
String result = String.Empty;
byte lowWordLowByte = (byte)(pValue & 0xFF);
byte lowWordHighByte = (byte)((pValue >> 8) & 0xFF);
byte highWordLowByte = (byte)((pValue >> 16) & 0xFF);
byte highWordHighByte = (byte)((pValue >> 24) & 0xFF);
result = lowWordLowByte.ToString("X2") + " " +
lowWordHighByte.ToString("X2") + " " +
highWordLowByte.ToString("X2") + " " +
highWordHighByte.ToString("X2");
return result;
}
Remarks:
Of course, instead of uint pSize there could be an enum specifying Byte, Word, DoubleWord.
Instead of converting to hex string and creating the little endian string, you can convert to chars and do whatever you want to do.
Hope this will help someone!
Shouldn't it be:
byte lower = (byte) ( number & 0xFF );
To be a little more creative
[System.Runtime.InteropServices.StructLayout( System.Runtime.InteropServices.LayoutKind.Explicit )]
public struct IntToBytes {
[System.Runtime.InteropServices.FieldOffset(0)]
public int Int32;
[System.Runtime.InteropServices.FieldOffset(0)]
public byte First;
[System.Runtime.InteropServices.FieldOffset(1)]
public byte Second;
[System.Runtime.InteropServices.FieldOffset(2)]
public byte Third;
[System.Runtime.InteropServices.FieldOffset(3)]
public byte Fourth;
}
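A hypothetical usage sketch (not in the original answer); note that on little-endian hardware First is the least significant byte:
var converter = new IntToBytes { Int32 = 3510 }; // 0x00000DB6
byte lower = converter.First;  // 0xB6 on a little-endian machine
byte upper = converter.Second; // 0x0D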
Scenario:
I have a string of hexadecimal characters which encode 8-bit signed integers. Each two characters represent a byte which employ the leftmost (MSB) bit as the sign (rather than two's complement). I am converting these to signed ints within a loop and wondered if there's a better way to do it. There are too many conversions and I am sure there's a more efficient method that I am missing.
Current Code:
string strData = "FFC000407F"; // example input data, encodes: -127, -64, 0, 64, 127
int v;
for (int x = 0; x < strData.Length/2; x++)
{
v = HexToInt(strData.Substring(x * 2, 2));
Console.WriteLine(v); // do stuff with v
}
private int HexToInt(string _hexData)
{
string strBinary = Convert.ToString(Convert.ToInt32(_hexData, 16), 2).PadLeft(_hexData.Length * 4, '0');
int i = Convert.ToInt32(strBinary.Substring(1, 7), 2);
i = (strBinary.Substring(0, 1) == "0" ? i : -i);
return i;
}
Question:
Is there a more streamlined and direct approach to reading two hex characters and converting them to an int when they represent a signed int (-127 to 127) using the leftmost bit as the sign?
Just convert it to an int and handle the sign bit by testing the size of the converted number and masking off the sign bit.
private int HexToInt(string _hexData)
{
int number = Convert.ToInt32(_hexData, 16);
if (number >= 0x80)
return -(number & 0x7F);
return number;
}
Like this: (Tested)
(int)unchecked((sbyte)Convert.ToByte("FF", 16))
Explanation:
The unchecked cast to sbyte performs a direct reinterpretation as a signed byte, treating the high bit as a two's-complement sign bit.
However, two's complement has a different range and representation from your sign-and-magnitude encoding (0xFF becomes -1 rather than -127), so it won't help you here. Convert explicitly instead:
sbyte SignAndMagnitudeToTwosComplement(byte b)
{
var isNegative = ((b & 0x80) >> 7);
return (sbyte)((b ^ 0x7F * isNegative) + isNegative);
}
Then:
sbyte ReadSignAndMagnitudeByte(string hex)
{
return SignAndMagnitudeToTwosComplement(Convert.ToByte(hex,16));
}
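A hypothetical usage sketch with the sample data from the question; each pair of hex characters decodes to one signed value (-127, -64, 0, 64, 127):
string strData = "FFC000407F";
for (int x = 0; x < strData.Length / 2; x++)
{
    sbyte v = ReadSignAndMagnitudeByte(strData.Substring(x * 2, 2));
    Console.WriteLine(v);
}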
Interestingly, I can find implementations for the Internet Checksum in almost every language except C#. Does anyone have an implementation to share?
Remember, the internet protocol specifies that:
"The checksum field is the 16 bit one's complement of the one's
complement sum of all 16 bit words in the header. For purposes of
computing the checksum, the value of the checksum field is zero."
More explanation can be found from Dr. Math.
There are some efficiency pointers available, but that's not really a large concern for me at this point.
Please include your tests! (Edit: Valid comment regarding testing someone else's code - but I am going off of the protocol, don't have test vectors of my own, and would rather unit test it than put it into production to see if it matches what is currently being used! ;-)
Edit: Here are some unit tests that I came up with. They test an extension method which iterates through the entire byte collection. Please comment if you find fault in the tests.
[TestMethod()]
public void InternetChecksum_SimplestValidValue_ShouldMatch()
{
IEnumerable<byte> value = new byte[1]; // should work for any-length array of zeros
ushort expected = 0xFFFF;
ushort actual = value.InternetChecksum();
Assert.AreEqual(expected, actual);
}
[TestMethod()]
public void InternetChecksum_ValidSingleByteExtreme_ShouldMatch()
{
IEnumerable<byte> value = new byte[]{0xFF};
ushort expected = 0xFF;
ushort actual = value.InternetChecksum();
Assert.AreEqual(expected, actual);
}
[TestMethod()]
public void InternetChecksum_ValidMultiByteExtrema_ShouldMatch()
{
IEnumerable<byte> value = new byte[] { 0x00, 0xFF };
ushort expected = 0xFF00;
ushort actual = value.InternetChecksum();
Assert.AreEqual(expected, actual);
}
I knew I had this one stored away somewhere...
http://cyb3rspy.wordpress.com/2008/03/27/ip-header-checksum-function-in-c/
Well, I dug up an implementation from an old code base and it passes the tests I specified in the question, so here it is (as an extension method):
public static ushort InternetChecksum(this IEnumerable<byte> value)
{
byte[] buffer = value.ToArray();
int length = buffer.Length;
int i = 0;
UInt32 sum = 0;
UInt32 data = 0;
while (length > 1)
{
data = 0;
data = (UInt32)(
((UInt32)(buffer[i]) << 8)
|
((UInt32)(buffer[i + 1]) & 0xFF)
);
sum += data;
if ((sum & 0xFFFF0000) > 0)
{
sum = sum & 0xFFFF;
sum += 1;
}
i += 2;
length -= 2;
}
if (length > 0)
{
sum += (UInt32)(buffer[i] << 8);
//sum += (UInt32)(buffer[i]);
if ((sum & 0xFFFF0000) > 0)
{
sum = sum & 0xFFFF;
sum += 1;
}
}
sum = ~sum;
sum = sum & 0xFFFF;
return (UInt16)sum;
}
I have made an implementation of the IPv4 header checksum calculation, as defined in RFC 791.
Extension Methods
public static ushort GetInternetChecksum(this ReadOnlySpan<byte> bytes)
=> CalculateChecksum(bytes, ignoreHeaderChecksum: true);
public static bool IsValidChecksum(this ReadOnlySpan<byte> bytes)
// Should equal zero (valid)
=> CalculateChecksum(bytes, ignoreHeaderChecksum: false) == 0;
The Checksum Calculation
using System.Buffers.Binary;
private static ushort CalculateChecksum(ReadOnlySpan<byte> bytes, bool ignoreHeaderChecksum)
{
ushort checksum = 0;
for (int i = 0; i <= 18; i += 2)
{
// i = 0 e.g. [0..2] Version and Internal Header Length
// i = 2 e.g. [2..4] Total Length
// i = 4 e.g. [4..6] Identification
// i = 6 e.g. [6..8] Flags and Fragmentation Offset
// i = 8 e.g. [8..10] TTL and Protocol
// i = 10 e.g. [10..12] Header Checksum
// i = 12 e.g. [12..14] Source Address #1
// i = 14 e.g. [14..16] Source Address #2
// i = 16 e.g. [16..18] Destination Address #1
// i = 18 e.g. [18..20] Destination Address #2
if (ignoreHeaderChecksum && i == 10) continue;
ushort value = BinaryPrimitives.ReadUInt16BigEndian(bytes[i..(i + 2)]);
// Each time a carry occurs, we must add a 1 to the sum
if (checksum + value > ushort.MaxValue)
{
checksum++;
}
checksum += value;
}
// One’s complement
return (ushort)~checksum;
}
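As a quick sanity check (my own addition, assuming the extension methods above sit in an accessible static class), here is a commonly cited sample IPv4 header whose checksum field (bytes 10-11) is 0xB861:
ReadOnlySpan<byte> header = new byte[]
{
    0x45, 0x00, 0x00, 0x73, 0x00, 0x00, 0x40, 0x00,
    0x40, 0x11, 0xB8, 0x61, 0xC0, 0xA8, 0x00, 0x01,
    0xC0, 0xA8, 0x00, 0xC7
};
Console.WriteLine(header.GetInternetChecksum().ToString("X4")); // B861
Console.WriteLine(header.IsValidChecksum());                    // True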