How do I convert a DateTime (yyyyMMddHHmm) to a packed BCD (size 6) representation in C#?
using System;
namespace Exercise
{
internal class Program
{
private static void Main(string[] args)
{
byte res = to_bcd(12);
}
private static byte to_bcd(int n)
{
// extract each digit from the input number n
byte d1 = Convert.ToByte(n/10);
byte d2 = Convert.ToByte(n%10);
// combine the decimal digits into a BCD number
return Convert.ToByte((d1 << 4) | d2);
}
}
}
The result you get in the res variable is 18.
Thanks!
What you get is correct: 18 == 0x12 in hex, which is the BCD form of the 12 you passed to to_bcd.
static byte[] ToBCD(DateTime d)
{
    // requires: using System.Collections.Generic;
    List<byte> bytes = new List<byte>();
    // "yyyyMMddHHmm" gives 12 decimal digits -> 6 packed BCD bytes
    string s = d.ToString("yyyyMMddHHmm");
    for (int i = 0; i < s.Length; i += 2)
    {
        // pack two digits into one byte: first in the high nibble, second in the low nibble
        bytes.Add((byte)((s[i] - '0') << 4 | (s[i + 1] - '0')));
    }
    return bytes.ToArray();
}
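For example, a quick check (this assumes a using System.Collections.Generic; directive for the List<byte>):
DateTime d = new DateTime(2010, 12, 31, 23, 59, 0);
byte[] packed = ToBCD(d);
// "201012312359" packs into { 0x20, 0x10, 0x12, 0x31, 0x23, 0x59 }
Console.WriteLine(BitConverter.ToString(packed)); // 20-10-12-31-23-59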
I'll give a short example to demonstrate the idea. You can extend this solution to your whole date format input.
The BCD format encapsulates exactly two decimal digits into one 8-bit number. For example, the representation of 92 would be, in binary:
1001 0010
or 0x92 in hex. This happens to be 146 when converted to decimal.
The code to do this would need to shift the first digit left by 4 bits and then combine with the second digit. So:
byte to_bcd(int n)
{
    // extract each digit from the input number n
    byte d1 = (byte)(n / 10);
    byte d2 = (byte)(n % 10);
    // combine the decimal digits into a BCD number
    return (byte)((d1 << 4) | d2);
}
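A quick check against the 92 example above:
byte b = to_bcd(92);
Console.WriteLine(b);                // 146
Console.WriteLine(b.ToString("X2")); // 92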
Community,
Assume we have a random integer which is in the range Int32.MinValue - Int32.MaxValue.
I'd like to find two numbers which result in this integer when calculated together using the right shift operator.
An example :
If the input value is 123456 two possible output values could be 2022703104 and 14, because 2022703104 >> 14 == 123456
Here is my attempt:
private static int[] DetermineShr(int input)
{
int[] arr = new int[2];
if (input == 0)
{
arr[0] = 0;
arr[1] = 0;
return arr;
}
int a = (int)Math.Log(int.MaxValue / Math.Abs(input), 2);
int b = (int)(input * Math.Pow(2, a));
arr[0] = a;
arr[1] = b;
return arr;
}
However, for some negative values it doesn't work; the output won't result in a correct calculation.
And for very small input values such as -2147483648 it throws an exception:
How can I modify my function so it will produce a valid output for all input values between Int32.MinValue and Int32.MaxValue?
Well, let's compare
123456 == 11110001001000000
2022703104 == 1111000100100000000000000000000
Can you see the pattern? If you're given the shift (14 in your case), the answer is
(123456 << shift) + any number in the [0 .. 2**shift - 1] range
However, on large values the left shift can result in integer overflow; if the shift is small (less than 32) I suggest using long:
private static long Factor(int source, int shift) {
unchecked {
// (uint): we want bits, not two complement
long value = (uint) source;
return value << shift;
}
}
Test:
int a = -1;
long b = Factor(-1, 3);
Console.WriteLine(a);
Console.WriteLine(Convert.ToString(a, 2));
Console.WriteLine(b);
Console.WriteLine(Convert.ToString(b, 2));
will return
-1
11111111111111111111111111111111
34359738360
11111111111111111111111111111111000
Please notice that negative integers, being stored as two's complement
https://en.wikipedia.org/wiki/Two%27s_complement
are, in fact, quite large when treated as unsigned integers.
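Putting that together, here is a minimal sketch of how the original DetermineShr could be reworked along those lines (my illustration, assuming a 64-bit result is acceptable): keep the value as a long so the left shift never overflows, even for Int32.MinValue.
private static long[] DetermineShrLong(int input, int shift)
{
    // (long)input << shift cannot overflow for any int when shift < 32,
    // and an arithmetic right shift by the same amount recovers the input exactly
    long value = (long)input << shift;
    return new long[] { value, shift };
}
// e.g. the worst case:
long[] r = DetermineShrLong(int.MinValue, 14);
Console.WriteLine(r[0] >> (int)r[1]); // -2147483648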
I want to compare a stream of bits of arbitrary length to a mask in C# and return a ratio of how many bits were the same.
The mask to check against is anywhere between 2 bits and 8k bits long (with 90% of the masks being 5 bits long); the input can be anywhere between 2 bits and ~500k bits, with an average input of 12k bits (but yeah, most of the time it will be comparing 5 bits with the first 5 bits of that 12k).
Now my naive implementation would be something like this:
bool[] mask = new[] { true, true, false, true };
float dendrite(bool[] input) {
    int correct = 0;
    for (int i = 0; i < mask.Length; i++) {
        if (input[i] == mask[i])
            correct++;
    }
    return (float)correct / (float)mask.Length;
}
but I expect this is better handled (more efficient) with some kind of binary operator magic?
Anyone got any pointers?
EDIT: the datatype is not fixed at this point in my design, so if ints or byte arrays work better, I'd also be a happy camper. I'm trying to optimize for efficiency here; the faster the computation, the better.
e.g. if you can make it work like this:
int[] mask = new[] { 1, 1, 0, 1 };
float dendrite(int[] input) {
    int correct = 0;
    for (int i = 0; i < mask.Length; i++) {
        if (input[i] == mask[i])
            correct++;
    }
    return (float)correct / (float)mask.Length;
}
or this:
int mask = 13; // 1101
float dendrite(int input) {
    return // your magic here;
} // would return 0.75 for an input of 101
// (1100101 in binary), which matches 3 bits
// of the 4-bit mask == .75
ANSWER:
I ran each proposed answer against the others; Fredou's and Marten's solutions ran neck and neck, but Fredou submitted the fastest, leanest implementation in the end. Of course, since the average result varies quite wildly between implementations, I might have to revisit this post later on. :) But that's probably just me messing up my test script. (I hope; too late now, going to bed =)
sparse1.Cyclone
1317ms 3467107ticks 10000iterations
result: 0,7851563
sparse1.Marten
288ms 759362ticks 10000iterations
result: 0,05066964
sparse1.Fredou
216ms 568747ticks 10000iterations
result: 0,8925781
sparse1.Marten
296ms 778862ticks 10000iterations
result: 0,05066964
sparse1.Fredou
216ms 568601ticks 10000iterations
result: 0,8925781
sparse1.Marten
300ms 789901ticks 10000iterations
result: 0,05066964
sparse1.Cyclone
1314ms 3457988ticks 10000iterations
result: 0,7851563
sparse1.Fredou
207ms 546606ticks 10000iterations
result: 0,8925781
sparse1.Marten
298ms 786352ticks 10000iterations
result: 0,05066964
sparse1.Cyclone
1301ms 3422611ticks 10000iterations
result: 0,7851563
sparse1.Marten
292ms 769850ticks 10000iterations
result: 0,05066964
sparse1.Cyclone
1305ms 3433320ticks 10000iterations
result: 0,7851563
sparse1.Fredou
209ms 551178ticks 10000iterations
result: 0,8925781
(Test script copied here; if I destroyed yours while modifying it, let me know: https://dotnetfiddle.net/h9nFSa )
how about this one - dotnetfiddle example
using System;
namespace ConsoleApplication1
{
public class Program
{
public static void Main(string[] args)
{
int a = Convert.ToInt32("0001101", 2);
int b = Convert.ToInt32("1100101", 2);
Console.WriteLine(dendrite(a, 4, b));
}
private static float dendrite(int mask, int len, int input)
{
    // XOR gives the differing bits; (1 << len) - 1 keeps only the low len bits of the input
    return 1 - getBitCount(mask ^ (input & ((1 << len) - 1))) / (float)len;
}
private static int getBitCount(int bits)
{
    // SWAR popcount: counts the set bits of a 32-bit word without branching
    bits = bits - ((bits >> 1) & 0x55555555);
    bits = (bits & 0x33333333) + ((bits >> 2) & 0x33333333);
    return ((bits + (bits >> 4) & 0xf0f0f0f) * 0x1010101) >> 24;
}
}
}
64 bits one here - dotnetfiddler
using System;
namespace ConsoleApplication1
{
public class Program
{
public static void Main(string[] args)
{
// 1
ulong a = Convert.ToUInt64("0000000000000000000000000000000000000000000000000000000000001101", 2);
ulong b = Convert.ToUInt64("1110010101100101011001010110110101100101011001010110010101100101", 2);
Console.WriteLine(dendrite(a, 4, b));
}
private static float dendrite(ulong mask, int len, ulong input)
{
return 1 - getBitCount(mask ^ (input & (ulong.MaxValue >> (64 - len)))) / (float)len;
}
private static ulong getBitCount(ulong bits)
{
bits = bits - ((bits >> 1) & 0x5555555555555555UL);
bits = (bits & 0x3333333333333333UL) + ((bits >> 2) & 0x3333333333333333UL);
return unchecked(((bits + (bits >> 4)) & 0xF0F0F0F0F0F0F0FUL) * 0x101010101010101UL) >> 56;
}
}
}
I came up with this code:
static float dendrite(ulong input, ulong mask)
{
// get bits that are same (0 or 1) in input and mask
ulong samebits = mask & ~(input ^ mask);
// count number of same bits
int correct = cardinality(samebits);
// count number of bits in mask
int inmask = cardinality(mask);
// compute fraction (0.0 to 1.0)
return inmask == 0 ? 0f : correct / (float)inmask;
}
// this is a little hack to count the number of bits set to one in a 64-bit word
static int cardinality(ulong word)
{
const ulong mult = 0x0101010101010101;
const ulong mask1h = (~0UL) / 3 << 1;
const ulong mask2l = (~0UL) / 5;
const ulong mask4l = (~0UL) / 17;
word -= (mask1h & word) >> 1;
word = (word & mask2l) + ((word >> 2) & mask2l);
word += word >> 4;
word &= mask4l;
return (int)((word * mult) >> 56);
}
This will check 64 bits at a time. If you need more than that, you can just split the input data into 64-bit words, compare them one by one, and compute the average result.
Here's a .NET fiddle with the code and a working test case:
https://dotnetfiddle.net/5hYFtE
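If you need to go past 64 bits, here is a rough sketch of that splitting idea (my own addition, reusing the cardinality helper above and assuming input and mask are equal-length ulong arrays; each word is weighted by how many mask bits it contains):
static float dendriteWords(ulong[] input, ulong[] mask)
{
    int correct = 0;
    int inMask = 0;
    for (int i = 0; i < mask.Length; i++)
    {
        // bits that are set in the mask and equal in input and mask
        ulong samebits = mask[i] & ~(input[i] ^ mask[i]);
        correct += cardinality(samebits);
        inMask += cardinality(mask[i]);
    }
    return inMask == 0 ? 0f : correct / (float)inMask;
}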
I would change the code to something along these lines:
// hardcoded bitmask
byte mask = 255;
float dendrite(byte input) {
    int correct = 0;
    // store the xor:ed result (a 0 bit means input and mask agree at that position)
    byte xored = (byte)(input ^ mask);
    // loop through each bit
    for (int i = 0; i < 8; i++) {
        // if the bit is 0 then it was correct
        if ((xored & (1 << i)) == 0)
            correct++;
    }
    // the mask is 8 bits wide
    return (float)correct / 8f;
}
The above uses a mask and input of 8 bits, but of course you could modify this to use a 4 byte integer and so on.
Not sure if this will work as expected, but it might give you some clues on how to proceed.
For example if you only would like to check the first 4 bits you could change the code to something like:
float dendrite(byte input) {
    // hardcoded bitmask, i.e. 1101
    byte mask = 13;
    // number of bits to check
    byte bits = 4;
    int correct = 0;
    // store the xor:ed result
    byte xored = (byte)(input ^ mask);
    // loop through each bit; notice that we are only checking the first 4 bits
    for (int i = 0; i < bits; i++) {
        // if the bit is 0 then it was correct
        if ((xored & (1 << i)) == 0)
            correct++;
    }
    return (float)correct / (float)bits;
}
Of course it might be faster to actually use an int instead of a byte.
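For instance, a minimal sketch of an int-based variant (my own illustration, checking only the first bits bits of a 32-bit value):
static float dendrite(int input, int mask, int bits)
{
    // a 0 bit in the xor marks a position where input and mask agree
    int xored = input ^ mask;
    int correct = 0;
    for (int i = 0; i < bits; i++)
    {
        if ((xored & (1 << i)) == 0)
            correct++;
    }
    return (float)correct / (float)bits;
}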
I need to convert an int to binary, padded with extra leading zero bits.
string aaa = Convert.ToString(3, 2);
it returns 11, but I need 0011, or 00000011.
How is it done?
11 is the binary representation of 3, and that representation is only 2 bits long:
3 = 2^0 * 1 + 2^1 * 1
You can use the String.PadLeft(Int32, Char) method to add the leading zeros.
// convert number 3 to binary string.
// And pad '0' to the left until string will be not less then 4 characters
Convert.ToString(3, 2).PadLeft(4, '0') // 0011
Convert.ToString(3, 2).PadLeft(8, '0') // 00000011
I've created a method to dynamically write leading zeroes
public static string ToBinary(int myValue)
{
    string binVal = Convert.ToString(myValue, 2);
    int bits = 0;
    int bitblock = 4;
    // round the output length up to a multiple of bitblock (4) bits
    for (int i = 0; i < binVal.Length; i = i + bitblock)
    { bits += bitblock; }
    return binVal.PadLeft(bits, '0');
}
First we convert the value to binary. Then bits is initialized; it will hold the length of the binary output. One bit block has 4 digits. In the for loop we step over the length of the converted binary value and add bitblock to bits each time, so the output length is rounded up to a multiple of 4.
Examples:
Input: 1 -> 0001;
Input: 127 -> 01111111
etc....
You can use these methods:
public static class BinaryExt
{
public static string ToBinary(this int number, int bitsLength = 32)
{
return NumberToBinary(number, bitsLength);
}
public static string NumberToBinary(int number, int bitsLength = 32)
{
string result = Convert.ToString(number, 2).PadLeft(bitsLength, '0');
return result;
}
public static int FromBinaryToInt(this string binary)
{
return BinaryToInt(binary);
}
public static int BinaryToInt(string binary)
{
return Convert.ToInt32(binary, 2);
}
}
Sample:
int number = 3;
string byte3 = number.ToBinary(8); // output: 00000011
string bits32 = BinaryExt.NumberToBinary(3); // output: 00000000000000000000000000000011
public static String HexToBinString(this String value)
{
String binaryString = Convert.ToString(Convert.ToInt32(value, 16), 2);
Int32 zeroCount = Convert.ToInt32(Math.Ceiling(Convert.ToDouble(binaryString.Length) / 8)) * 8;
return binaryString.PadLeft(zeroCount, '0');
}
Just use what Soner answered:
Convert.ToString(3, 2).PadLeft(4, '0')
Just want to add, for your information: the int parameter is the total number of characters your string will have, and the char parameter is the character that will be added to fill the lacking space in your string. In your example, you want the output 0011, which is 4 characters and needs 0's, thus you use 4 as the int param and '0' as the char.
string aaa = Convert.ToString(3, 2).PadLeft(10, '0');
This may not be the most elegant solution but it is the fastest from my testing:
string IntToBinary(int value, int totalDigits) {
char[] output = new char[totalDigits];
int diff = sizeof(int) * 8 - totalDigits;
for (int n = 0; n != totalDigits; ++n) {
output[n] = (char)('0' + (char)((((uint)value << (n + diff))) >> (sizeof(int) * 8 - 1)));
}
return new string(output);
}
string LongToBinary(long value, int totalDigits) {
char[] output = new char[totalDigits];
int diff = sizeof(long) * 8 - totalDigits;
for (int n = 0; n != totalDigits; ++n) {
output[n] = (char)('0' + (char)((((ulong)value << (n + diff))) >> (sizeof(long) * 8 - 1)));
}
return new string(output);
}
This version completely avoids if statements and therefore branching, which creates very fast and, most importantly, linear code. This beats the Convert.ToString() function from Microsoft by up to 50%.
Here is some benchmark code
long testConv(Func<int, int, string> fun, int value, int digits, long avg) {
long result = 0;
for (long n = 0; n < avg; n++) {
var sw = Stopwatch.StartNew();
fun(value, digits);
result += sw.ElapsedTicks;
}
Console.WriteLine((string)fun(value, digits));
return result / (avg / 100);//for bigger output values
}
string IntToBinary(int value, int totalDigits) {
char[] output = new char[totalDigits];
int diff = sizeof(int) * 8 - totalDigits;
for (int n = 0; n != totalDigits; ++n) {
output[n] = (char)('0' + (char)((((uint)value << (n + diff))) >> (sizeof(int) * 8 - 1)));
}
return new string(output);
}
string Microsoft(int value, int totalDigits) {
return Convert.ToString(value, toBase: 2).PadLeft(totalDigits, '0');
}
int v = 123, it = 10000000;
Console.WriteLine(testConv(Microsoft, v, 10, it));
Console.WriteLine(testConv(IntToBinary, v, 10, it));
Here are my results
0001111011
122
0001111011
75
Microsoft's method takes 1.22 ticks while mine only takes 0.75 ticks.
With this you can get the binary representation of a number with the corresponding leading zeros.
string binaryString = Convert.ToString(3, 2);
int myOffset = 4;
string modified = binaryString.PadLeft(binaryString.Length % myOffset == 0 ? binaryString.Length : binaryString.Length + (myOffset - binaryString.Length % myOffset), '0');
In your case the modified string will be 0011; if you want, you can change the offset to 8, for instance, and you will get 00000011, and so on.
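The same padding rule reads a little easier wrapped in a small helper (PadToMultiple is just my own name for it):
static string PadToMultiple(string binary, int blockSize)
{
    // round the length up to the next multiple of blockSize, then pad with zeros
    int paddedLength = ((binary.Length + blockSize - 1) / blockSize) * blockSize;
    return binary.PadLeft(paddedLength, '0');
}
// PadToMultiple(Convert.ToString(3, 2), 4) -> "0011"
// PadToMultiple(Convert.ToString(3, 2), 8) -> "00000011"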
I am trying to convert a short into 2 bytes to store in a byte array; here is the snippet that's been working well "so far".
if (type == "short")
{
size = data.size;
databuffer[index+1] = (byte)(data.numeric_data >> 8);
databuffer[index] = (byte)(data.numeric_data & 255);
return size;
}
numeric_data is of type int. It all worked well until I processed the value 284 (decimal). It turns out that 284 >> 8 is 1 instead of 4.
The main goal is to have:
byte[0] = 28
byte[1] = 4
Is this what you are looking for:
static void Main(string[] args)
{
short data=284;
byte[] bytes=BitConverter.GetBytes(data);
// bytes[0] = 28
// bytes[1] = 1
}
Just for fun:
public static byte[] ToByteArray(short s)
{
//return, if `short` can be cast to `byte` without overflow
if (s <= byte.MaxValue)
return new byte[] { (byte)s };
List<byte> bytes = new List<byte>();
byte b = 0;
//determine delta through the number of digits
short delta = (short)Math.Pow(10, s.ToString().Length - 3);
//as soon as byte can be not more than 3 digits length
for (int i = 0; i < 3; i++)
{
//take first 3 (or 2, or 1) digits from the high-order digit
short temp = (short)(s / delta);
if (temp > byte.MaxValue) //if it's still too big
delta *= 10;
else //the byte is found, break the loop
{
b = (byte)temp;
break;
}
}
//add the found byte
bytes.Add(b);
//recursively search in the rest of the number
bytes.AddRange(ToByteArray((short)(s % delta)));
return bytes.ToArray();
}
This recursive method does what the OP needs, at least for any positive short value.
Why would 284 >> 8 be 4?
Why would 284 be split into two bytes equal to 28 and 4?
The binary representation of 284 is 0000 0001 0001 1100. As you can see, there are two bytes (eight bits each): 0000 0001 (the high byte, worth 256) and 0001 1100 (28 in decimal).
284 >> 8 is 1 (0000 0001), and that is correct.
284 should be split into two bytes equal to 1 and 28 (1 * 256 + 28 = 284).
Your conversion is correct!
If you insist:
short val = 284;
byte a = (byte)(val / 10);
byte b = (byte)(val % 10);
Disclaimer:
This does not make much sense, but it is what you want. I assume you want values from 0 to 99. The logical thing to do would be to use 100 as the denominator and not 10. But then again, I have no idea what you want to do.
Drop the nonsense conversion you are using and go for System.BitConverter.GetBytes / System.BitConverter.ToInt16:
//to bytes
var buffer = System.BitConverter.GetBytes((short)284); //your short value
//from bytes
var value = System.BitConverter.ToInt16(buffer, 0);
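Note that BitConverter follows the machine's byte order (little-endian on x86/x64), so the low byte comes first. A quick round trip:
short data = 284;                              // 0x011C
byte[] buffer = BitConverter.GetBytes(data);   // { 0x1C, 0x01 } on a little-endian machine
short back = BitConverter.ToInt16(buffer, 0);  // 284 again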
I want to convert an int to a byte[2] array using BCD.
The int in question will come from DateTime representing the Year and must be converted to two bytes.
Is there any pre-made function that does this or can you give me a simple way of doing this?
example:
int year = 2010
would output:
byte[2]{0x20, 0x10};
static byte[] Year2Bcd(int year) {
if (year < 0 || year > 9999) throw new ArgumentException();
int bcd = 0;
for (int digit = 0; digit < 4; ++digit) {
int nibble = year % 10;
bcd |= nibble << (digit * 4);
year /= 10;
}
return new byte[] { (byte)((bcd >> 8) & 0xff), (byte)(bcd & 0xff) };
}
Beware that you asked for a big-endian result, that's a bit unusual.
Use this method.
public static byte[] ToBcd(int value){
if(value<0 || value>99999999)
throw new ArgumentOutOfRangeException("value");
byte[] ret=new byte[4];
for(int i=0;i<4;i++){
ret[i]=(byte)(value%10);
value/=10;
ret[i]|=(byte)((value%10)<<4);
value/=10;
}
return ret;
}
This is essentially how it works.
If the value is less than 0 or greater than 99999999, the value won't fit in four bytes. More formally, if the value is less than 0 or is 10^(n*2) or greater, where n is the number of bytes, the value won't fit in n bytes.
For each byte:
Set that byte to the remainder of the value divided by 10. (This will place the last digit in the low nibble [half-byte] of the current byte.)
Divide the value by 10.
Add 16 times the remainder of the value divided by 10 to the byte. (This will place the now-last digit in the high nibble of the current byte.)
Divide the value by 10.
(One optimization is to set every byte to 0 beforehand -- which is implicitly done by .NET when it allocates a new array -- and to stop iterating when the value reaches 0. This latter optimization is not done in the code above, for simplicity. Also, if available, some compilers or assemblers offer a divide/remainder routine that allows retrieving the quotient and remainder in one division step, an optimization which is not usually necessary though.)
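For example (my quick check of the method above), the year 2010 comes out with the least significant digit pair first:
byte[] bcd = ToBcd(2010);
// bcd is { 0x10, 0x20, 0x00, 0x00 }; for a two-byte big-endian year,
// take the first two bytes in reverse order: { 0x20, 0x10 }
Console.WriteLine(BitConverter.ToString(bcd)); // 10-20-00-00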
Here's a terrible brute-force version. I'm sure there's a better way than this, but it ought to work anyway.
int digitOne = year / 1000;
int digitTwo = (year - digitOne * 1000) / 100;
int digitThree = (year - digitOne * 1000 - digitTwo * 100) / 10;
int digitFour = year - digitOne * 1000 - digitTwo * 100 - digitThree * 10;
byte[] bcdYear = new byte[] { (byte)(digitOne << 4 | digitTwo), (byte)(digitThree << 4 | digitFour) };
The sad part about it is that fast binary to BCD conversions are built into the x86 microprocessor architecture, if you could get at them!
Here is a slightly cleaner version than Jeffrey's:
static byte[] IntToBCD(int input)
{
if (input > 9999 || input < 0)
throw new ArgumentOutOfRangeException("input");
int thousands = input / 1000;
int hundreds = (input -= thousands * 1000) / 100;
int tens = (input -= hundreds * 100) / 10;
int ones = (input -= tens * 10);
byte[] bcd = new byte[] {
(byte)(thousands << 4 | hundreds),
(byte)(tens << 4 | ones)
};
return bcd;
}
Maybe a simple parse function containing this loop:
i = 0;
while (id > 0)
{
    twodigits = id % 100; // need 2 digits per byte
    arr[i] = (byte)(twodigits % 10 + twodigits / 10 * 16); // ones digit in the low 4 bits, tens digit shifted up by 4 bits
    id /= 100;
    i++;
}
A more general solution:
private IEnumerable<Byte> GetBytes(Decimal value)
{
Byte currentByte = 0;
Boolean odd = true;
while (value > 0)
{
if (odd)
currentByte = 0;
Decimal rest = value % 10;
value = (value-rest)/10;
currentByte |= (Byte)(odd ? (Byte)rest : (Byte)((Byte)rest << 4));
if(!odd)
yield return currentByte;
odd = !odd;
}
if(!odd)
yield return currentByte;
}
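A quick check of that iterator (assuming a using System.Linq; directive for ToArray()):
byte[] bcd = GetBytes(2010).ToArray();
// two decimal digits per byte, least significant pair first
Console.WriteLine(BitConverter.ToString(bcd)); // 10-20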
The same version as Peter O.'s, but in VB.NET:
Public Shared Function ToBcd(ByVal pValue As Integer) As Byte()
If pValue < 0 OrElse pValue > 99999999 Then Throw New ArgumentOutOfRangeException("value")
Dim ret As Byte() = New Byte(3) {} 'All bytes are init with 0's
For i As Integer = 0 To 3
ret(i) = CByte(pValue Mod 10)
pValue = Math.Floor(pValue / 10.0)
ret(i) = ret(i) Or CByte((pValue Mod 10) << 4)
pValue = Math.Floor(pValue / 10.0)
If pValue = 0 Then Exit For
Next
Return ret
End Function
The trick here is to be aware that simply using pValue /= 10 will round the value so if for instance the argument is "16", the first part of the byte will be correct, but the result of the division will be 2 (as 1.6 will be rounded up). Therefore I use the Math.Floor method.
I made a generic routine posted at IntToByteArray that you could use like:
var yearInBytes = ConvertBigIntToBcd(2010, 2);
static byte[] IntToBCD(int input) {
byte[] bcd = new byte[] {
(byte)(input >> 8),
(byte)(input & 0x00FF)
};
return bcd;
}