Comparing bits efficiently (overlap set of x) - C#

I want to compare a stream of bits of arbitrary length to a mask in C# and return a ratio of how many bits were the same.
The mask to check against is anywhere from 2 bits to 8k bits long (with 90% of the masks being 5 bits long); the input can be anywhere from 2 bits up to ~500k bits, with an average input of 12k bits (but yeah, most of the time it will be comparing 5 bits with the first 5 bits of that 12k).
Now my naive implementation would be something like this:
bool[] mask = new[] { true, true, false, true };
float dendrite(bool[] input) {
    int correct = 0;
    for (int i = 0; i < mask.Length; i++) {
        if (input[i] == mask[i])
            correct++;
    }
    return (float)correct / (float)mask.Length;
}
but I expect this is better handled (more efficient) with some kind of binary operator magic?
Anyone got any pointers?
EDIT: the datatype is not fixed at this point in my design, so if ints or byte arrays work better, I'd also be a happy camper. I'm trying to optimize for efficiency here; the faster the computation, the better.
e.g. if you can make it work like this:
int[] mask = new[] { 1, 1, 0, 1 };
float dendrite(int[] input) {
    int correct = 0;
    for (int i = 0; i < mask.Length; i++) {
        if (input[i] == mask[i])
            correct++;
    }
    return (float)correct / (float)mask.Length;
}
or this:
int mask = 13; // 1101 in binary
float dendrite(int input) {
    return // your magic here;
}
// would return 0.75 for an input of 101 (decimal),
// i.e. 1100101 in binary: its lowest 4 bits (0101)
// match 3 bits of the 4-bit mask == 0.75
ANSWER:
I ran each proposed answer against the others; Fredou's and Marten's solutions ran neck and neck, but Fredou submitted the fastest, leanest implementation in the end. Of course, since the average result varies quite wildly between implementations, I might have to revisit this post later on. :) But that's probably just me messing up in my test script. (I hope; too late now, going to bed =)
sparse1.Cyclone
1317ms 3467107ticks 10000iterations
result: 0,7851563
sparse1.Marten
288ms 759362ticks 10000iterations
result: 0,05066964
sparse1.Fredou
216ms 568747ticks 10000iterations
result: 0,8925781
sparse1.Marten
296ms 778862ticks 10000iterations
result: 0,05066964
sparse1.Fredou
216ms 568601ticks 10000iterations
result: 0,8925781
sparse1.Marten
300ms 789901ticks 10000iterations
result: 0,05066964
sparse1.Cyclone
1314ms 3457988ticks 10000iterations
result: 0,7851563
sparse1.Fredou
207ms 546606ticks 10000iterations
result: 0,8925781
sparse1.Marten
298ms 786352ticks 10000iterations
result: 0,05066964
sparse1.Cyclone
1301ms 3422611ticks 10000iterations
result: 0,7851563
sparse1.Marten
292ms 769850ticks 10000iterations
result: 0,05066964
sparse1.Cyclone
1305ms 3433320ticks 10000iterations
result: 0,7851563
sparse1.Fredou
209ms 551178ticks 10000iterations
result: 0,8925781
(test script copied here; if I destroyed yours while modifying it, let me know: https://dotnetfiddle.net/h9nFSa )

how about this one - dotnetfiddle example
using System;

namespace ConsoleApplication1
{
    public class Program
    {
        public static void Main(string[] args)
        {
            int a = Convert.ToInt32("0001101", 2);
            int b = Convert.ToInt32("1100101", 2);
            Console.WriteLine(dendrite(a, 4, b));
        }

        private static float dendrite(int mask, int len, int input)
        {
            // XOR yields a 1 bit wherever input and mask differ;
            // keep only the lowest len bits, count the differences,
            // and convert that to a ratio of matching bits
            return 1 - getBitCount(mask ^ (input & (int.MaxValue >> (32 - len)))) / (float)len;
        }

        private static int getBitCount(int bits)
        {
            // parallel bit count within the 32-bit word
            bits = bits - ((bits >> 1) & 0x55555555);
            bits = (bits & 0x33333333) + ((bits >> 2) & 0x33333333);
            return ((bits + (bits >> 4) & 0xf0f0f0f) * 0x1010101) >> 24;
        }
    }
}
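getBitCount here is a SWAR ("SIMD within a register") population count. As a quick sanity check (my addition, not part of the original answer), a naive bit-by-bit count agrees with it for any input:

static int naiveBitCount(int bits)
{
    // count set bits one at a time
    int count = 0;
    for (int i = 0; i < 32; i++)
        if ((bits & (1 << i)) != 0)
            count++;
    return count;
}
// e.g. both naiveBitCount(13) and getBitCount(13) return 3 (1101 has three set bits)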
64-bit one here - dotnetfiddle example
using System;

namespace ConsoleApplication1
{
    public class Program
    {
        public static void Main(string[] args)
        {
            ulong a = Convert.ToUInt64("0000000000000000000000000000000000000000000000000000000000001101", 2);
            ulong b = Convert.ToUInt64("1110010101100101011001010110110101100101011001010110010101100101", 2);
            Console.WriteLine(dendrite(a, 4, b));
        }

        private static float dendrite(ulong mask, int len, ulong input)
        {
            // same idea as the 32-bit version, using a 64-bit word
            return 1 - getBitCount(mask ^ (input & (ulong.MaxValue >> (64 - len)))) / (float)len;
        }

        private static ulong getBitCount(ulong bits)
        {
            // 64-bit parallel bit count
            bits = bits - ((bits >> 1) & 0x5555555555555555UL);
            bits = (bits & 0x3333333333333333UL) + ((bits >> 2) & 0x3333333333333333UL);
            return unchecked(((bits + (bits >> 4)) & 0xF0F0F0F0F0F0F0FUL) * 0x101010101010101UL) >> 56;
        }
    }
}

I came up with this code:
static float dendrite(ulong input, ulong mask)
{
    // get bits that are same (0 or 1) in input and mask
    ulong samebits = mask & ~(input ^ mask);
    // count number of same bits
    int correct = cardinality(samebits);
    // count number of bits in mask
    int inmask = cardinality(mask);
    // compute fraction (0.0 to 1.0)
    return inmask == 0 ? 0f : correct / (float)inmask;
}

// this is a little hack to count the number of bits set to one in a 64-bit word
static int cardinality(ulong word)
{
    const ulong mult = 0x0101010101010101;
    const ulong mask1h = (~0UL) / 3 << 1;
    const ulong mask2l = (~0UL) / 5;
    const ulong mask4l = (~0UL) / 17;
    word -= (mask1h & word) >> 1;
    word = (word & mask2l) + ((word >> 2) & mask2l);
    word += word >> 4;
    word &= mask4l;
    return (int)((word * mult) >> 56);
}
This will check 64 bits at a time. If you need more than that, you can just split the input data into 64-bit words, compare them one by one, and compute the average result.
Here's a .NET fiddle with the code and a working test case:
https://dotnetfiddle.net/5hYFtE
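To extend this beyond 64 bits, a rough chunking sketch (my addition, not part of the original answer; it assumes input and mask are already packed into ulong[] arrays of equal length, and it takes a simple average of the per-word ratios, so weighting by mask bits per word may be more appropriate):

static float dendriteMany(ulong[] input, ulong[] mask)
{
    // average the per-word ratios across all 64-bit chunks
    float total = 0f;
    for (int i = 0; i < mask.Length; i++)
        total += dendrite(input[i], mask[i]);
    return mask.Length == 0 ? 0f : total / mask.Length;
}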

I would change the code to something along these lines:
// hardcoded bitmask
byte mask = 255;
float dendrite(byte input) {
    int correct = 0;
    // store the XORed result (the cast is needed: ^ on bytes yields an int)
    byte xored = (byte)(input ^ mask);
    // loop through each bit
    for (int i = 0; i < 8; i++) {
        // if the bit is 0 then it was correct
        if ((xored & (1 << i)) == 0)
            correct++;
    }
    return (float)correct / 8f;
}
The above uses a mask and input of 8 bits, but of course you could modify this to use a 4-byte integer and so on.
Not sure if this will work as expected, but it might give you some clues on how to proceed.
For example, if you only wanted to check the first 4 bits, you could change the code to something like:
float dendrite(byte input) {
    // hardcoded bitmask, i.e. 1101
    byte mask = 13;
    // number of bits to check
    byte bits = 4;
    int correct = 0;
    // store the XORed result (the cast is needed: ^ on bytes yields an int)
    byte xored = (byte)(input ^ mask);
    // loop through each bit; note that we only check the first 4 bits
    for (int i = 0; i < bits; i++) {
        // if the bit is 0 then it was correct
        if ((xored & (1 << i)) == 0)
            correct++;
    }
    return (float)correct / (float)bits;
}
Of course it might be faster to actually use an int instead of a byte.
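For instance, a 32-bit variant along those lines might look like this (my sketch of that suggestion, not tested by the original answerer):

float dendrite32(int input, int mask, int bits) {
    int correct = 0;
    // XOR leaves 0 bits wherever input and mask agree
    int xored = input ^ mask;
    for (int i = 0; i < bits; i++) {
        if ((xored & (1 << i)) == 0)
            correct++;
    }
    return (float)correct / (float)bits;
}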

Related

Extract a byte into a specific bit

I have one byte of data, and I have to extract values from it in the following manner.
data[0] has to be split into:
id (5 bits)
Sequence (2 bits)
HashAppData (1 bit)
data[1] has to be split into:
id (6 bits)
offset (2 bits)
The required function is below, where the byte array length is 2 and I have to extract the fields in the above manner.
public static int ParseData(byte[] data)
{
    // All code goes here
}
I couldn't find any suitable solution for how to do this. Can you please help me extract it?
EDIT: the extracted fragment datatype should be Integer.
Something like this?
int id = (data[0] >> 3) & 31;
int sequence = (data[0] >> 1) & 3;
int hashAppData = data[0] & 1;
int id2 = (data[1] >> 2) & 63;
int offset = data[1] & 3;
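A quick worked check (my addition), assuming data[0] = 0xAB (10101011 in binary):

byte b0 = 0xAB;                 // 10101011
int id = (b0 >> 3) & 31;        // 10101 -> 21
int sequence = (b0 >> 1) & 3;   // 01    -> 1
int hashAppData = b0 & 1;       // 1     -> 1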
This is how I'd do it for the first byte:
byte value = 155;
byte maskForHighest5 = 128+64+32+16+8;
byte maskForNext2 = 4+2;
byte maskForLast = 1;
byte result1 = (byte)((value & maskForHighest5) >> 3); // shift right 3 bits
byte result2 = (byte)((value & maskForNext2) >> 1); // shift right 1 bit
byte result3 = (byte)(value & maskForLast);
Working demo (.NET Fiddle):
https://dotnetfiddle.net/lNZ9TR
Code for the 2nd byte will be very similar.
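A sketch for that second byte, following the same pattern (my addition; the original answer leaves it out):

byte value2 = 0x9D;                       // hypothetical example second byte: 10011101
byte maskForHighest6 = 128+64+32+16+8+4;  // top 6 bits
byte maskForLast2 = 2+1;                  // bottom 2 bits
byte id2 = (byte)((value2 & maskForHighest6) >> 2); // 100111 -> 39
byte offset = (byte)(value2 & maskForLast2);        // 01 -> 1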
If you're uncomfortable with bit manipulation, use an extension method to keep the intent of ParseData clear. This extension can be adapted for other integers by replacing both uses of byte with the necessary type.
public static int GetBitValue(this byte b, int offset, int length)
{
    const int ByteWidth = sizeof(byte) * 8;

    // System.Diagnostics validation - excluded in release builds
    Debug.Assert(offset >= 0);
    Debug.Assert(offset < ByteWidth);
    Debug.Assert(length > 0);
    Debug.Assert(length <= ByteWidth);
    Debug.Assert(offset + length <= ByteWidth);

    var shift = ByteWidth - offset - length;
    var mask = (1 << length) - 1;
    return (b >> shift) & mask;
}
Usage in this case:
public static int ParseData(byte[] data)
{
    { // data[0]
        var id = data[0].GetBitValue(0, 5);
        var sequence = data[0].GetBitValue(5, 2);
        var hashAppData = data[0].GetBitValue(7, 1);
    }
    { // data[1]
        var id = data[1].GetBitValue(0, 6);
        var offset = data[1].GetBitValue(6, 2);
    }
    // ... return necessary data
}

generate a random biginteger between two values c#

I use the BigInteger class, and I want to generate a BigInteger number between two values min and max randomly, so I used this method found on Stack Overflow:
public BigInteger getRandom(int n)
{
    var rng = new RNGCryptoServiceProvider();
    byte[] bytes = new byte[n / 8];
    rng.GetBytes(bytes);
    return new BigInteger(bytes);
}
But I cannot generate numbers between min and max with it, because the parameter of this function represents the number of bits. Can someone help me? Thank you in advance!
min and max are also BigIntegers.
Try this one:
// max exclusive (not included!)
public static BigInteger GetRandom(RNGCryptoServiceProvider rng, BigInteger min, BigInteger max)
{
    // shift to 0...max-min
    BigInteger max2 = max - min;
    int bits = max2.bitCount();
    // 1 bit for sign (that we will ignore, we only want positive numbers!)
    bits++;
    // we round to the next byte
    int bytes = (bits + 7) / 8;
    int uselessBits = bytes * 8 - bits;
    var bytes2 = new byte[bytes];

    while (true)
    {
        rng.GetBytes(bytes2);

        // The maximum number of useless bits is 1 (sign) + 7 (rounding) == 8
        if (uselessBits == 8)
        {
            // and it is exactly one byte!
            bytes2[0] = 0;
        }
        else
        {
            // Remove the sign and the useless bits
            for (int i = 0; i < uselessBits; i++)
            {
                // Equivalent to: byte bit = (byte)(1 << (7 - (i % 8)));
                byte bit = (byte)(1 << (7 & (~i)));
                // Equivalent to: bytes2[i / 8] &= (byte)~bit;
                bytes2[i >> 3] &= (byte)~bit;
            }
        }

        var bi = new BigInteger(bytes2);

        // If it is too big, then retry!
        if (bi >= max2)
        {
            continue;
        }

        // unshift the number
        bi += min;
        return bi;
    }
}
There are some comments that explain a little how it works.
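A usage sketch (my addition; it assumes this custom BigInteger class exposes an int constructor and the comparison/arithmetic operators used above):

var rng = new RNGCryptoServiceProvider();
var min = new BigInteger(1000);   // hypothetical int constructor
var max = new BigInteger(2000);
BigInteger r = GetRandom(rng, min, max); // 1000 <= r < 2000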

Detecting overflow in fixed-point multiplication

Short version: how can I detect overflow using the fixed-point multiplication described here but for a signed type?
Long version:
I still have some overflow issues with my Q31.32 fixed point type. To make it easier to work out examples on paper, I've made a much smaller type using the same algorithm, a Q3.4 based on sbyte. I figure that if I can work out all the kinks for a Q3.4 type, the same logic should apply for a Q31.32 one.
Note that I could very easily implement Q3.4 multiplication by performing it on a 16-bit integer, but I'm acting as if that didn't exist, because for the Q31.32 type I'd need a 128-bit integer, which doesn't exist (and BigInteger is too slow).
I want my multiplication to handle overflow by saturation: when overflow happens, the result is the highest or lowest value that can be represented, depending on the signs of the operands.
This is basically how the type is represented:
struct Fix8 {
    sbyte m_rawValue;
    public static readonly Fix8 One = new Fix8(1 << 4);
    public static readonly Fix8 MinValue = new Fix8(sbyte.MinValue);
    public static readonly Fix8 MaxValue = new Fix8(sbyte.MaxValue);

    Fix8(sbyte value) {
        m_rawValue = value;
    }

    public static explicit operator decimal(Fix8 value) {
        return (decimal)value.m_rawValue / One.m_rawValue;
    }

    public static explicit operator Fix8(decimal value) {
        var nearestExact = Math.Round(value * 16m) * 0.0625m;
        return new Fix8((sbyte)(nearestExact * One.m_rawValue));
    }
}
And this is how I currently handle multiplication:
public static Fix8 operator *(Fix8 x, Fix8 y) {
    sbyte xl = x.m_rawValue;
    sbyte yl = y.m_rawValue;

    // split x and y into their highest and lowest 4 bits
    byte xlo = (byte)(xl & 0x0F);
    sbyte xhi = (sbyte)(xl >> 4);
    byte ylo = (byte)(yl & 0x0F);
    sbyte yhi = (sbyte)(yl >> 4);

    // perform cross-multiplications
    byte lolo = (byte)(xlo * ylo);
    sbyte lohi = (sbyte)((sbyte)xlo * yhi);
    sbyte hilo = (sbyte)(xhi * (sbyte)ylo);
    sbyte hihi = (sbyte)(xhi * yhi);

    // shift results as appropriate
    byte loResult = (byte)(lolo >> 4);
    sbyte midResult1 = lohi;
    sbyte midResult2 = hilo;
    sbyte hiResult = (sbyte)(hihi << 4);

    // add everything
    sbyte sum = (sbyte)((sbyte)loResult + midResult1 + midResult2 + hiResult);

    // if the top 4 bits of hihi (unused in the result) are neither all 0s nor all 1s,
    // then this means the result overflowed.
    sbyte topCarry = (sbyte)(hihi >> 4);
    bool opSignsEqual = ((xl ^ yl) & sbyte.MinValue) == 0;
    if (topCarry != 0 && topCarry != -1) {
        return opSignsEqual ? MaxValue : MinValue;
    }

    // if signs of operands are equal and sign of result is negative,
    // then multiplication overflowed upwards; the reverse is also true
    if (opSignsEqual) {
        if (sum < 0) {
            return MaxValue;
        }
    }
    else {
        if (sum > 0) {
            return MinValue;
        }
    }

    return new Fix8(sum);
}
This gives results accurate within the precision of the type and handles most overflow cases. It doesn't, however, handle these ones, for example:
Failed -8 * 2 : expected -8 but got 0
Failed 3.5 * 5 : expected 7,9375 but got 1,5
Let's work out how the multiplication happens for the first one.
-8 and 2 are represented as x = 0x80 and y = 0x20.
xlo = 0x80 & 0x0F = 0x00
xhi = 0x80 >> 4 = 0xf8
ylo = 0x20 & 0x0F = 0x00
yhi = 0x20 >> 4 = 0x02
lolo = xlo * ylo = 0x00
lohi = xlo * yhi = 0x00
hilo = xhi * ylo = 0x00
hihi = xhi * yhi = 0xf0
The sum is obviously 0 as all terms are 0 save for hihi, but only the lowest 4 bits of hihi are used in the final sum.
My usual overflow detection magic doesn't work here: the result is zero, so the sign of the result is meaningless (e.g. 0.0625 * -0.0625 == 0 by rounding down; 0 is positive, yet the signs of the operands differ); also, the high bits of hihi are 1111, which often happens even when there's no overflow.
Basically I don't know how to detect that overflow happened here. Is there a more general method?
You should examine hihi to see whether it contains any relevant bits outside the range of the result. You can also compare the highest bit of the result with the corresponding bit in hihi to see whether a carry propagated that far, and if it did (i.e. the bit changed), whether that indicates an overflow (i.e. the bit changed in the wrong direction). All of this would probably be easier to formulate if you were using one's complement notation and treated the sign bits separately. But in that case, your example of -8 would be pointless.
Looking at your example, you have hihi = 0xf0.
hihi   11110000
result ±###.####
So in this case, if there were no overflow in hihi alone, then the first 5 bits would all be the same, and the sign of the result would match the sign of hihi. This is not the case here. You can check this using
if ((hihi & 0x08) * 0x1f != (hihi & 0xf8))
    handle_overflow();
The carry into hihi can probably be detected most easily by adding the result one summand at a time and performing common overflow detection after each step. Haven't got a nice piece of code for that ready.
This took me a long time, but I eventually figured everything out. This code is tested to work for every possible combination of x and y in the range allowed by sbyte. Here is the commented code:
static sbyte AddOverflowHelper(sbyte x, sbyte y, ref bool overflow) {
    var sum = (sbyte)(x + y);
    // x + y overflows if sign(x) ^ sign(y) != sign(sum)
    overflow |= ((x ^ y ^ sum) & sbyte.MinValue) != 0;
    return sum;
}

/// <summary>
/// Multiplies two Fix8 numbers.
/// Deals with overflow by saturation.
/// </summary>
public static Fix8 operator *(Fix8 x, Fix8 y) {
    // Using the cross-multiplication algorithm, for learning purposes.
    // It would be both trivial and much faster to use an Int16, but this technique
    // won't work for a Fix64, since there's no Int128 or equivalent (and BigInteger is too slow).
    sbyte xl = x.m_rawValue;
    sbyte yl = y.m_rawValue;

    byte xlo = (byte)(xl & 0x0F);
    sbyte xhi = (sbyte)(xl >> 4);
    byte ylo = (byte)(yl & 0x0F);
    sbyte yhi = (sbyte)(yl >> 4);

    byte lolo = (byte)(xlo * ylo);
    sbyte lohi = (sbyte)((sbyte)xlo * yhi);
    sbyte hilo = (sbyte)(xhi * (sbyte)ylo);
    sbyte hihi = (sbyte)(xhi * yhi);

    byte loResult = (byte)(lolo >> 4);
    sbyte midResult1 = lohi;
    sbyte midResult2 = hilo;
    sbyte hiResult = (sbyte)(hihi << 4);

    bool overflow = false;
    // Check for overflow at each step of the sum; if it happens, overflow will be true
    sbyte sum = AddOverflowHelper((sbyte)loResult, midResult1, ref overflow);
    sum = AddOverflowHelper(sum, midResult2, ref overflow);
    sum = AddOverflowHelper(sum, hiResult, ref overflow);

    bool opSignsEqual = ((xl ^ yl) & sbyte.MinValue) == 0;

    // if signs of operands are equal and sign of result is negative,
    // then multiplication overflowed positively
    // the reverse is also true
    if (opSignsEqual) {
        if (sum < 0 || (overflow && xl > 0)) {
            return MaxValue;
        }
    }
    else {
        if (sum > 0) {
            return MinValue;
        }
        // If signs differ, both operands' magnitudes are greater than 1,
        // and the result is greater than the negative operand, then there was negative overflow.
        sbyte posOp, negOp;
        if (xl > yl) {
            posOp = xl;
            negOp = yl;
        }
        else {
            posOp = yl;
            negOp = xl;
        }
        if (sum > negOp && negOp < -(1 << 4) && posOp > (1 << 4)) {
            return MinValue;
        }
    }

    // if the top 4 bits of hihi (unused in the result) are neither all 0s nor all 1s,
    // then this means the result overflowed.
    sbyte topCarry = (sbyte)(hihi >> 4);
    // -17 (-1.0625) is a problematic value which never causes overflow but messes up the carry bits
    if (topCarry != 0 && topCarry != -1 && xl != -17 && yl != -17) {
        return opSignsEqual ? MaxValue : MinValue;
    }

    // Round up if necessary, but don't overflow
    var lowCarry = (byte)(lolo << 4);
    if (lowCarry >= 0x80 && sum < sbyte.MaxValue) {
        ++sum;
    }

    return new Fix8(sum);
}
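As a quick sanity check against the failing cases from the question (my sketch, using the explicit decimal conversions defined on the Fix8 struct earlier):

var a = (Fix8)(-8m) * (Fix8)2m;  // saturates to MinValue
var b = (Fix8)3.5m * (Fix8)5m;   // saturates to MaxValue
Console.WriteLine((decimal)a);   // -8
Console.WriteLine((decimal)b);   // 7.9375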
I'm putting all this together into a properly unit tested fixed-point math library for .NET, which will be available here: https://github.com/asik/FixedMath.Net

Perform signed arithmetic on numbers defined as bit ranges of unsigned bytes

I have two bytes. I need to turn them into two integers where the first 12 bits make one int and the last 4 make the other. I figure I can AND the 2nd byte with 0x0f to get the 4 bits, but I'm not sure how to make that into a byte with the correct sign.
update:
Just to clarify, I have 2 bytes:
byte1 = 0xab
byte2 = 0xcd
and I need to do something like this with them:
var value = 0xabc * 10 ^ 0xd;
Sorry for the confusion.
Thanks for all of the help.
Try to use this code:
int a = 10;
int a1 = a & 0x000F;
int a2 = a & 0xFFF0;
For kicks:
public static partial class Levitate
{
    public static Tuple<int, int> UnPack(this int value)
    {
        uint sign = (uint)value & 0x80000000;
        int small = ((int)sign >> 28) | (value & 0x0F);
        int big = value & 0xFFF0;
        return new Tuple<int, int>(small, big);
    }
}
int a = 10;
var parts = a.UnPack(); // parts.Item1 = small, parts.Item2 = big
Ok, let's try this again knowing what we're shooting for. I tried the following out in VS2008 and it seems to work fine, that is, both outOne and outTwo = -1 at the end. Is that what you're looking for?
byte b1 = 0xff;
byte b2 = 0xff;
ushort total = (ushort)((b1 << 8) + b2);
// the 12-bit field already sits at the top of the short, so an
// arithmetic shift right by 4 sign-extends it
short outOne = (short)((short)(total & 0xFFF0) >> 4);
// shift the 4-bit field up to the sbyte's sign bit, then shift
// back down to sign-extend it
sbyte outTwo = (sbyte)((sbyte)((total & 0xF) << 4) >> 4);
Assuming you have the following two bytes:
byte a = 0xab;
byte b = 0xcd;
and consider 0xab the first 8 bits and 0xcd the second 8 bits, or 0xabc the first 12 bits and 0xd the last four bits. Then you can get these bits as follows:
int x = (a << 4) | (b >> 4); // x == 0x0abc
int y = b & 0x0f; // y == 0x000d
edited to take into account clarification of "signing" rules:
public void unpack( byte[] octets , out int hiNibbles , out int loNibble )
{
    if ( octets == null ) throw new ArgumentNullException("octets");
    if ( octets.Length != 2 ) throw new ArgumentException("octets");

    int value = (int) BitConverter.ToInt16( octets , 0 );

    // since the value is signed, right shifts sign-extend
    hiNibbles = value >> 4;
    loNibble  = ( value << 28 ) >> 28;

    return;
}
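A usage sketch (my addition; note that BitConverter.ToInt16 reads the bytes little-endian on typical platforms, so octets[0] is the low byte):

// hypothetical example: 0xabcd packed as { 0xcd, 0xab }
unpack(new byte[] { 0xcd, 0xab }, out int hiNibbles, out int loNibble);
// hiNibbles == -1348 (0xabc sign-extended), loNibble == -3 (0xd sign-extended)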

Converting a int to a BCD byte array

I want to convert an int to a byte[2] array using BCD.
The int in question will come from DateTime representing the Year and must be converted to two bytes.
Is there any pre-made function that does this or can you give me a simple way of doing this?
example:
int year = 2010
would output:
byte[2]{0x20, 0x10};
static byte[] Year2Bcd(int year) {
    if (year < 0 || year > 9999) throw new ArgumentException();
    int bcd = 0;
    for (int digit = 0; digit < 4; ++digit) {
        int nibble = year % 10;
        bcd |= nibble << (digit * 4);
        year /= 10;
    }
    return new byte[] { (byte)((bcd >> 8) & 0xff), (byte)(bcd & 0xff) };
}
Beware that you asked for a big-endian result, that's a bit unusual.
Use this method.
public static byte[] ToBcd(int value) {
    if (value < 0 || value > 99999999)
        throw new ArgumentOutOfRangeException("value");
    byte[] ret = new byte[4];
    for (int i = 0; i < 4; i++) {
        ret[i] = (byte)(value % 10);
        value /= 10;
        ret[i] |= (byte)((value % 10) << 4);
        value /= 10;
    }
    return ret;
}
This is essentially how it works.
If the value is less than 0 or greater than 99999999, the value won't fit in four bytes. More formally, if the value is less than 0 or is 10^(n*2) or greater, where n is the number of bytes, the value won't fit in n bytes.
For each byte:
Set that byte to the remainder of the value divided by 10. (This will place the last digit in the low nibble [half-byte] of the current byte.)
Divide the value by 10.
Add 16 times the remainder of the value-divided-by-10 to the byte. (This will place the now-last digit in the high nibble of the current byte.)
Divide the value by 10.
(One optimization is to set every byte to 0 beforehand -- which is implicitly done by .NET when it allocates a new array -- and to stop iterating when the value reaches 0. This latter optimization is not done in the code above, for simplicity. Also, if available, some compilers or assemblers offer a divide/remainder routine that allows retrieving the quotient and remainder in one division step, an optimization which is not usually necessary though.)
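A sketch of that early-exit variant (my addition, following the optimization just described):

public static byte[] ToBcdEarlyExit(int value) {
    if (value < 0 || value > 99999999)
        throw new ArgumentOutOfRangeException("value");
    byte[] ret = new byte[4]; // .NET zero-initializes the array
    // stop as soon as all remaining digits are zero
    for (int i = 0; i < 4 && value != 0; i++) {
        ret[i] = (byte)(value % 10);
        value /= 10;
        ret[i] |= (byte)((value % 10) << 4);
        value /= 10;
    }
    return ret;
}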
Here's a terrible brute-force version. I'm sure there's a better way than this, but it ought to work anyway.
int digitOne = year / 1000;
int digitTwo = (year - digitOne * 1000) / 100;
int digitThree = (year - digitOne * 1000 - digitTwo * 100) / 10;
int digitFour = year - digitOne * 1000 - digitTwo * 100 - digitThree * 10;
// casts are needed: the << and | expressions yield ints
byte[] bcdYear = new byte[] { (byte)(digitOne << 4 | digitTwo), (byte)(digitThree << 4 | digitFour) };
The sad part about it is that fast binary to BCD conversions are built into the x86 microprocessor architecture, if you could get at them!
Here is a slightly cleaner version than Jeffrey's:
static byte[] IntToBCD(int input)
{
    if (input > 9999 || input < 0)
        throw new ArgumentOutOfRangeException("input");
    int thousands = input / 1000;
    int hundreds = (input -= thousands * 1000) / 100;
    int tens = (input -= hundreds * 100) / 10;
    int ones = (input -= tens * 10);
    byte[] bcd = new byte[] {
        (byte)(thousands << 4 | hundreds),
        (byte)(tens << 4 | ones)
    };
    return bcd;
}
Maybe a simple parse function containing this loop (assuming id and arr are declared by the caller):
int i = 0;
while (id > 0)
{
    int twodigits = id % 100; // need 2 digits per byte
    arr[i] = (byte)(twodigits % 10 + twodigits / 10 * 16); // first digit in the low 4 bits, second digit shifted up 4 bits
    id /= 100;
    i++;
}
A more general solution:
private IEnumerable<Byte> GetBytes(Decimal value)
{
    Byte currentByte = 0;
    Boolean odd = true;
    while (value > 0)
    {
        if (odd)
            currentByte = 0;
        Decimal rest = value % 10;
        value = (value - rest) / 10;
        currentByte |= (Byte)(odd ? (Byte)rest : (Byte)((Byte)rest << 4));
        if (!odd)
            yield return currentByte;
        odd = !odd;
    }
    if (!odd)
        yield return currentByte;
}
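For example (my worked usage; ToArray requires System.Linq, and note that the least-significant digit pair comes first):

var bytes = GetBytes(2010m).ToArray(); // { 0x10, 0x20 }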
Same version as Peter O.'s, but in VB.NET:
Public Shared Function ToBcd(ByVal pValue As Integer) As Byte()
    If pValue < 0 OrElse pValue > 99999999 Then Throw New ArgumentOutOfRangeException("value")
    Dim ret As Byte() = New Byte(3) {} 'All bytes are init with 0's
    For i As Integer = 0 To 3
        ret(i) = CByte(pValue Mod 10)
        pValue = Math.Floor(pValue / 10.0)
        ret(i) = ret(i) Or CByte((pValue Mod 10) << 4)
        pValue = Math.Floor(pValue / 10.0)
        If pValue = 0 Then Exit For
    Next
    Return ret
End Function
The trick here is to be aware that simply using pValue /= 10 will round the value, so if, for instance, the argument is 16, the first part of the byte will be correct, but the result of the division will be 2 (as 1.6 will be rounded up). Therefore I use the Math.Floor method (VB.NET's integer division operator \ would also avoid the rounding).
I made a generic routine posted at IntToByteArray that you could use like:
var yearInBytes = ConvertBigIntToBcd(2010, 2);
static byte[] IntToBCD(int input) {
    // note: this simply splits the int into its high and low bytes;
    // it assumes the value is already BCD-encoded (e.g. 0x2010)
    byte[] bcd = new byte[] {
        (byte)(input >> 8),
        (byte)(input & 0x00FF)
    };
    return bcd;
}
