Direct conversion between ASCII byte[] and int - C#

I have a program which reads bytes from the network. Sometimes, those bytes are the string representation of an integer in decimal or hexadecimal form.
Normally, I parse this with something like
var s=Encoding.ASCII.GetString(p.GetBuffer(),0,(int)p.Length);
int.TryParse(s, out number);
I feel that this is wasteful, as it has to allocate memory for the string without any real need for it.
Is there a better way I can do it in C#?
UPDATE
I've seen several suggestions to use the BitConverter class. This is not what I need. BitConverter transforms the binary representation of an int (4 bytes) into an int, but since the int here is in ASCII form, that doesn't apply.

I doubt it will have a substantial impact on performance or memory consumption, but you can do this relatively easily. One implementation for converting decimal numbers is shown below:
private static int IntFromDecimalAscii(byte[] bytes)
{
    int result = 0;

    // For each digit, add the digit's value times 10^n, where n is the
    // column number counting from right to left starting at 0.
    for(int i = 0; i < bytes.Length; ++i)
    {
        // ASCII digits are in the range 48 <= n <= 57. This code only
        // makes sense if we are dealing exclusively with digits, so
        // throw if we encounter a non-digit character
        if(bytes[i] < 48 || bytes[i] > 57)
        {
            throw new ArgumentException("Non-digit character present", "bytes");
        }

        // The bytes are in order from most to least significant, so
        // we need to reverse the index to get the right column number
        int exp = bytes.Length - i - 1;

        // Digits in ASCII start with 0 at 48, and move sequentially
        // to 9 at 57, so we can simply subtract 48 from a valid digit
        // to get its numeric value
        int digitValue = bytes[i] - 48;

        // Finally, add the digit value times the column value to the
        // result accumulator
        result += digitValue * (int)Math.Pow(10, exp);
    }
    return result;
}
This can easily be adapted to convert hex values as well:
private static int IntFromHexAscii(byte[] bytes)
{
    int result = 0;
    for(int i = 0; i < bytes.Length; ++i)
    {
        // ASCII hex digits are a bit more complex than decimal:
        // '0'-'9' are 48-57 and 'A'-'F' are 65-70, so reject
        // anything outside those two ranges.
        if(bytes[i] < 48 || bytes[i] > 70 || (bytes[i] > 57 && bytes[i] < 65))
        {
            throw new ArgumentException("Non-digit character present", "bytes");
        }

        int exp = bytes.Length - i - 1;

        // Assume decimal first, then fix it if it's actually hex.
        int digitValue = bytes[i] - 48;

        // This is safe because we already excluded all non-digit
        // characters above
        if(bytes[i] > 57) // A-F
        {
            digitValue = bytes[i] - 55;
        }

        // For hex, we use 16^n instead of 10^n
        result += digitValue * (int)Math.Pow(16, exp);
    }
    return result;
}
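For example (the byte arrays below are stand-ins for data read off the network; the hex version expects uppercase A-F):
byte[] decimalDigits = { (byte)'1', (byte)'2', (byte)'3', (byte)'4' };
int a = IntFromDecimalAscii(decimalDigits); // 1234

byte[] hexDigits = { (byte)'1', (byte)'A', (byte)'F' };
int b = IntFromHexAscii(hexDigits);         // 431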

Well, you could be a little less wasteful (at least in terms of source code characters) by avoiding the s declaration:
int.TryParse(Encoding.ASCII.GetString(p.GetBuffer(),0,(int)p.Length), out number);
But I think the only other real way to get a speed-up would be to do as the commenter suggests and hard-code a mapping into a Dictionary or something. This could save some time if you have to do this a lot, but it may not be worth the effort...
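For what it's worth, one possible reading of that suggestion is a hard-coded digit table like the sketch below (my own names, and it needs System.Collections.Generic); it is not obviously faster than subtracting 48:
private static readonly Dictionary<byte, int> DigitValues = new Dictionary<byte, int>
{
    { (byte)'0', 0 }, { (byte)'1', 1 }, { (byte)'2', 2 }, { (byte)'3', 3 }, { (byte)'4', 4 },
    { (byte)'5', 5 }, { (byte)'6', 6 }, { (byte)'7', 7 }, { (byte)'8', 8 }, { (byte)'9', 9 }
};

private static int IntFromAsciiLookup(byte[] bytes)
{
    int result = 0;
    foreach (byte b in bytes)
    {
        result = result * 10 + DigitValues[b]; // throws KeyNotFoundException on non-digits
    }
    return result;
}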

Related

Incorrect values when converting char digits to int

My end goal is to take a number like 29, pull it apart, and then add the two resulting digits. So, if the number is 29, the answer would be 2 + 9 = 11.
When I'm debugging, I can see that those values are being held, but it appears that other values are being used instead, in this case 50 and 57, so my answer is 107. I have no idea where these values are coming from and I don't know where to begin to fix it.
My code is:
class Program
{
    static void Main(string[] args)
    {
        int a = 29;
        int answer = addTwoDigits(a);
        Console.ReadLine();
    }

    public static int addTwoDigits(int n)
    {
        string number = n.ToString();
        char[] a = number.ToCharArray();
        int total = 0;
        for (int i = 0; i < a.Length; i++)
        {
            total = total + a[i];
        }
        return total;
    }
}
As mentioned, the issue with your code is that a char cast to int gives its ASCII code value, which doesn't match the numeric value of the digit. Instead of messing with strings and characters, just use good old math instead.
public static int AddDigits(int n)
{
    int total = 0;
    while (n > 0)
    {
        total += n % 10;
        n /= 10;
    }
    return total;
}
Modulo by 10 gives the least significant digit, and because integer division truncates, n /= 10 drops that digit and eventually reaches 0 when you run out of digits.
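For example, tracing the call for the question's input:
// AddDigits(29): 29 % 10 = 9 (total = 9), n becomes 2;
//                 2 % 10 = 2 (total = 11), n becomes 0 -> returns 11
int result = AddDigits(29); // 11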
Your code is actually adding the ASCII decimal values of the chars.
Take a look at https://www.cs.cmu.edu/~pattis/15-1XX/common/handouts/ascii.html
The decimal values of '2' and '9' are 50 and 57 respectively. You need to convert each char to an int before doing your addition.
int val = (int)Char.GetNumericValue(a[i]);
Try this:
public static int addTwoDigits(int n)
{
    string number = n.ToString();
    char[] a = number.ToCharArray();
    int total = 0;
    for (int i = 0; i < a.Length; i++)
    {
        total = total + (int)Char.GetNumericValue(a[i]);
    }
    return total;
}
A digit converted to char always gives its ASCII code, so use the Char.GetNumericValue() method to get the numeric value instead of the ASCII code.
Just for fun, I thought I'd see if I could do it in one line using LINQ and here it is:
public static int AddWithLinq(int n)
{
return n.ToString().Aggregate(0, (total, c) => total + int.Parse(c.ToString()));
}
I don't think it would be particularly "clean" code, but it may be educational at best!
You should use int.TryParse:
int num;
if (int.TryParse(a[i].ToString(), out num))
{
    total += num;
}
Your problem is that you're adding char values. Remember that a char is an integer value that represents a character in ASCII. When you add a[i] to total, you're adding the int value that represents that char; the compiler casts it automatically.
The problem is in this code line:
total = total + a[i];
The code above is equal to this code line:
total += (int)a[i];
// If a[i] = '2', the character value of the ASCII table is 50.
// Then, (int)a[i] = 50.
To solve your problem, you must change that line to this:
total += (int)Char.GetNumericValue(a[i]);
// If a[i] = '2',
// then (int)Char.GetNumericValue(a[i]) = 2.
You can see this answer for how to convert a numeric value from char to int.
At this page you can see the ASCII table of values.
public static int addTwoDigits(int n)
{
    string number = n.ToString();
    char[] a = number.ToCharArray();
    int total = 0;
    for (int i = 0; i < a.Length; i++)
    {
        total += Convert.ToInt32(number[i].ToString());
    }
    return total;
}
You don't need to convert the number to a string to find the digits. #juharr already explained how you can calculate the digits and the total in a loop. The following is a recursive version:
int addDigit(int total, int n)
{
    return (n < 10) ? total + n
                    : addDigit(total += n % 10, n /= 10);
}
This can be called with addDigit(0, 234233433) and returns 27. If n is less than 10, we are counting the last digit. Otherwise, extract the digit and add it to the total, then divide by 10 and repeat.
One could get clever and use currying to get rid of the initial total:
int addDigits(int i) => addDigit(0, i);
addDigits(234233433) also returns 27.
If the number is already a string, one could take advantage of the fact that a string can be treated as a char array, and chars can be converted to ints implicitly:
var total = "234233433".Sum(c=>c-'0');
This can handle arbitrarily large strings, as long as the total doesn't exceed int.MaxValue, eg:
"99999999999999999999".Sum(x=>x-'0'); // 20 9s returns 180
Unless the number is already in string form, though, this isn't efficient, nor does it verify that the contents are an actual number.

Calculating a checksum in C#

I am writing a checksum for a manifest file for a courier-based system written in C# in the .NET environment.
I need to have an 8 digit field representing the checksum which is calculated as per the following:
Record Check Sum Algorithm
Form the 32-bit arithmetic sum of the products of
• the 7 low order bits of each ASCII character in the record
• the position of each character in the record numbered from 1 for the first character.
for the length of the record up to but excluding the check sum field itself:
Sum = Σ_i ASCII(i-th character in the record) × i
where i runs over the length of the record excluding the check sum field.
After performing this calculation, convert the resultant sum to binary and split the 32 low order
bits of the Sum into eight blocks of 4 bits (octets). Note that each of the octets has a decimal
number value ranging from 0 to 15.
Add an offset of ASCII 0 ( zero ) to each octet to form an ASCII code number.
Convert the ASCII code number to its equivalent ASCII character thus forming printable
characters in the range 0123456789:;<=>?.
Concatenate each of these characters to form a single string of eight (8) characters in overall
length.
I am not the greatest at mathematics so I am struggling to write the code correctly as per the documentation.
I have written the following so far:
byte[] sumOfAscii = null;
for (int i = 1; i < recordCheckSum.Length; i++)
{
    string indexChar = recordCheckSum.ElementAt(i).ToString();
    byte[] asciiChar = Encoding.ASCII.GetBytes(indexChar);
    for (int x = 0; x < asciiChar[6]; x++)
    {
        sumOfAscii += asciiChar[x];
    }
}

//Turn into octets
byte firstOctet = 0;
for (int i = 0; i < sumOfAscii[6]; i++)
{
    firstOctet += recordCheckSum;
}
Here recordCheckSum is a string made up of deliveryAddresses, product names, etc., and excludes the 8-digit checksum.
Any help with calculating this would be greatly appreciated as I am struggling.
There are notes in line as I go along. Some more notes on the calculation at the end.
uint sum = 0;
uint zeroOffset = 0x30; // ASCII '0'
byte[] inputData = Encoding.ASCII.GetBytes(recordCheckSum);
for (int i = 0; i < inputData.Length; i++)
{
    int product = inputData[i] & 0x7F; // Take the low 7 bits from the record.
    product *= i + 1;                  // Multiply by the 1 based position.
    sum += (uint)product;              // Add the product to the running sum.
}

byte[] result = new byte[8];
for (int i = 0; i < 8; i++) // if the checksum is reversed, make this:
                            // for (int i = 7; i >= 0; i--)
{
    uint current = (uint)(sum & 0x0f); // take the lowest 4 bits.
    current += zeroOffset;             // Add '0'
    result[i] = (byte)current;
    sum = sum >> 4;                    // Right shift the bottom 4 bits off.
}
string checksum = Encoding.ASCII.GetString(result);
One note: I use the & and >> operators, which you may or may not be familiar with. The & operator is the bitwise AND operator. The >> operator is a logical shift right.
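As a small illustration of those two operators (the sum value here is arbitrary):
uint sum = 0x3C7;
uint low = sum & 0x0F;       // 0x07 - keeps only the lowest 4 bits
char c = (char)(low + 0x30); // '7'  - 0x30 is ASCII '0'
sum = sum >> 4;              // 0x3C - shifts the used 4 bits off the bottom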

Converting UInt64 to a binary array

I am having a problem with this method I wrote to convert a UInt64 to a binary array. For some numbers I am getting an incorrect binary representation.
Results
Correct
999 = 1111100111
Correct
18446744073709551615 = 1111111111111111111111111111111111111111111111111111111111111111
Incorrect?
18446744073709551614 =
0111111111111111111111111111111111111111111111111111111111111110
According to an online converter the binary value of 18446744073709551614 should be
1111111111111111111111111111111111111111111111111111111111111110
public static int[] GetBinaryArray(UInt64 n)
{
    if (n == 0)
    {
        return new int[2] { 0, 0 };
    }
    var val = (int)(Math.Log(n) / Math.Log(2));
    if (val == 0)
        val++;
    var arr = new int[val + 1];
    for (int i = val, j = 0; i >= 0 && j <= val; i--, j++)
    {
        if ((n & ((UInt64)1 << i)) != 0)
            arr[j] = 1;
        else
            arr[j] = 0;
    }
    return arr;
}
FYI: This is not a homework assignment, I require to convert an integer to binary array for encryption purposes, hence the need for an array of bits. Many solutions I have found on this site convert an integer to string representation of binary number which was useless so I came up with this mashup of various other methods.
An explanation as to why the method works for some numbers and not others would be helpful. Yes I used Math.Log and it is slow, but performance can be fixed later.
EDIT: And yes I do need the line where I use Math.Log because my array will not always be 64 bits long, for example if my number was 4 then in binary it is 100 which is array length 3. It is a requirement of my application to do it this way.
It's not the returned array for the input UInt64.MaxValue - 1 that is wrong; it's the array for UInt64.MaxValue that is wrong.
That array is 65 elements long, which is intuitively wrong because UInt64.MaxValue must fit in 64 bits.
Firstly, instead of doing a natural log and dividing by a log to base 2, you can just do a log to base 2.
Secondly, you also need to do a Math.Ceiling on the returned value, because the value has to fit fully inside that number of bits. Discarding the remainder with a cast to int means you then have to arbitrarily add val + 1 when declaring the result array. This is only correct for certain inputs, and UInt64.MaxValue is not one of them: adding one to the number of bits necessary is what produces the 65-element array.
Thirdly, and finally, you cannot left-shift by 64 bits, hence i = val - 1 in the for loop initialization.
Haven't tested this exhaustively...
public static int[] GetBinaryArray(UInt64 n)
{
    if (n == 0)
    {
        return new int[2] { 0, 0 };
    }
    var val = (int)Math.Ceiling(Math.Log(n, 2));
    if (val == 0)
        val++;
    var arr = new int[val];
    for (int i = val - 1, j = 0; i >= 0 && j <= val; i--, j++)
    {
        if ((n & ((UInt64)1 << i)) != 0)
            arr[j] = 1;
        else
            arr[j] = 0;
    }
    return arr;
}
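A quick check against the first example in the question:
int[] bits = GetBinaryArray(999);
Console.WriteLine(string.Join("", bits)); // 1111100111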

C# Parity Bits from a Binary Number [duplicate]

Possible Duplicate:
How to add even parity bit on 7-bit binary number
This is my new code, which converts a 7-bit binary number to an 8-bit one with even parity. However, it does not work. When I type in 0101010, for example, it says the number with even parity is 147. Could you help by showing me what is wrong, please?
using System;
using System.Collections.Generic;
using System.Collections;
using System.Linq;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Please enter a 7-bit binary number:");
            int a = Convert.ToInt32(Console.ReadLine());
            byte[] numberAsByte = new byte[] { (byte)a };
            System.Collections.BitArray bits = new System.Collections.BitArray(numberAsByte);
            a = a << 1;
            int count = 0;
            for (int i = 0; i < 8; i++)
            {
                if (bits[i])
                {
                    count++;
                }
                if (count % 2 == 1)
                {
                    bits[7] = true;
                }
                bits.CopyTo(numberAsByte, 0);
                a = numberAsByte[0];
                Console.WriteLine("The number with an even parity bit is:");
                Console.Write(a);
                Console.ReadLine();
            }
        }
    }
}
Use int.TryParse() on what you got from Console.ReadLine(). You then need to check that the number is between 0 and 127 to ensure it uses only 7 bits. You then need to count the number of 1s in the binary representation of the number, and add 128 to the number to set the parity bit, depending on whether you specified odd or even parity.
Counting 1s is your real homework assignment.
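A rough sketch of those steps (my own code, treating the console input as a plain decimal number the way this answer describes, and including the bit counting that is left as the exercise):
int value;
if (int.TryParse(Console.ReadLine(), out value) && value >= 0 && value <= 127)
{
    // Count the 1 bits in the 7-bit value.
    int ones = 0;
    for (int v = value; v != 0; v >>= 1)
        ones += v & 1;

    // Even parity: add 128 (set the eighth bit) when the count of 1s is odd.
    int withParity = (ones % 2 == 1) ? value + 128 : value;
    Console.WriteLine(withParity);
}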
By using the BitArray class you can write
int a = Convert.ToInt32(Console.ReadLine());
byte[] numberAsByte = new byte[] { (byte)a };
BitArray bits = new BitArray(numberAsByte);
This converts the single bits of your byte to a BitArray, which represents an array of Booleans that can be handled in an easy way. Note that the constructor of BitArray accepts an array of bytes. Since we have only one byte, we have to pass it a byte array of length 1 containing this single byte (numberAsByte).
Now let us count the bits that are set.
int count = 0;
for (int i = 0; i < 8; i++) {
    if (bits[i]) {
        count++;
    }
}
Note that we simply test a bit with bits[i], which yields a Boolean value. The test bits[i] == true, while perfectly legal and correct, yields the same result but is unnecessarily complicated. The if statement does not require a comparison; all it wants is a Boolean value.
This calculates an even parity bit:
if (count % 2 == 1) { // Odd number of bits
bits[7] = true; // Set the left most bit as parity bit for even parity.
}
The % operator is the modulo operator. It yields the remainder of an integer division. x % 2 yields 0 if x is even. If you want an odd parity bit, you can test count % 2 == 0 instead.
BitArray has a CopyTo method which converts our bits back to an array of bytes (containing only one byte in our case).
bits.CopyTo(numberAsByte, 0);
a = numberAsByte[0];
numberAsByte[0] contains our number with a parity bit.
If you want the parity bit on the right side, then you will have to shift the number to the left by one bit first.
int a = Convert.ToInt32(Console.ReadLine());
a = a << 1;
// Do the parity bit calculation as above and, if necessary
// set the right most bit as parity bit.
bits[0] = true;
According to Wikipedia there are two variations of parity bits, so I implemented a parameter to select the one you need. It supports user input up to 63 bits; I'm leaving the implementation of input validation to you.
ulong GetNumberParity(string input, bool isEvenParity)
{
    ulong tmp = Convert.ToUInt64(input, 2);
    ulong c = 0;
    for (int i = 0; i < 64; i++)
        c += tmp >> i & 1;

    if (isEvenParity)
        return Convert.ToUInt64((c % 2 != 0 ? "1" : "0") + input, 2);
    else
        return Convert.ToUInt64((c % 2 == 0 ? "1" : "0") + input, 2);
}
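For the question's sample input:
ulong withEvenParity = GetNumberParity("0101010", true); // three 1s in the input, so the
                                                          // parity bit is 1: 10101010 binary = 170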

Converting a int to a BCD byte array

I want to convert an int to a byte[2] array using BCD.
The int in question will come from DateTime representing the Year and must be converted to two bytes.
Is there any pre-made function that does this or can you give me a simple way of doing this?
example:
int year = 2010
would output:
byte[2]{0x20, 0x10};
static byte[] Year2Bcd(int year) {
    if (year < 0 || year > 9999) throw new ArgumentException();
    int bcd = 0;
    for (int digit = 0; digit < 4; ++digit) {
        int nibble = year % 10;
        bcd |= nibble << (digit * 4);
        year /= 10;
    }
    return new byte[] { (byte)((bcd >> 8) & 0xff), (byte)(bcd & 0xff) };
}
Beware that you asked for a big-endian result; that's a bit unusual.
Use this method.
public static byte[] ToBcd(int value) {
    if (value < 0 || value > 99999999)
        throw new ArgumentOutOfRangeException("value");
    byte[] ret = new byte[4];
    for (int i = 0; i < 4; i++) {
        ret[i] = (byte)(value % 10);
        value /= 10;
        ret[i] |= (byte)((value % 10) << 4);
        value /= 10;
    }
    return ret;
}
This is essentially how it works.
If the value is less than 0 or greater than 99999999, the value won't fit in four bytes. More formally, if the value is less than 0 or is 10^(n*2) or greater, where n is the number of bytes, the value won't fit in n bytes.
For each byte:
Set that byte to the remainder of the value divided by 10. (This will place the last digit in the low nibble [half-byte] of the current byte.)
Divide the value by 10.
Add 16 times the remainder of the value-divided-by-10 to the byte. (This will place the now-last digit in the high nibble of the current byte.)
Divide the value by 10.
(One optimization is to set every byte to 0 beforehand -- which is implicitly done by .NET when it allocates a new array -- and to stop iterating when the value reaches 0. This latter optimization is not done in the code above, for simplicity. Also, if available, some compilers or assemblers offer a divide/remainder routine that allows retrieving the quotient and remainder in one division step, an optimization which is not usually necessary though.)
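For example, ToBcd(2010) packs the digits from the least significant pair upward:
byte[] bcd = ToBcd(2010); // { 0x10, 0x20, 0x00, 0x00 }
Note that this byte order is the reverse of the two-byte big-endian result shown in the question.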
Here's a terrible brute-force version. I'm sure there's a better way than this, but it ought to work anyway.
int digitOne = year / 1000;
int digitTwo = (year - digitOne * 1000) / 100;
int digitThree = (year - digitOne * 1000 - digitTwo * 100) / 10;
int digitFour = year - digitOne * 1000 - digitTwo * 100 - digitThree * 10;
byte[] bcdYear = new byte[] { (byte)(digitOne << 4 | digitTwo), (byte)(digitThree << 4 | digitFour) };
The sad part about it is that fast binary to BCD conversions are built into the x86 microprocessor architecture, if you could get at them!
Here is a slightly cleaner version than Jeffrey's:
static byte[] IntToBCD(int input)
{
    if (input > 9999 || input < 0)
        throw new ArgumentOutOfRangeException("input");

    int thousands = input / 1000;
    int hundreds = (input -= thousands * 1000) / 100;
    int tens = (input -= hundreds * 100) / 10;
    int ones = (input -= tens * 10);

    byte[] bcd = new byte[] {
        (byte)(thousands << 4 | hundreds),
        (byte)(tens << 4 | ones)
    };
    return bcd;
}
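For the question's example this produces the requested big-endian pair:
byte[] year = IntToBCD(2010); // { 0x20, 0x10 }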
Maybe a simple parse function containing this loop would do:
// Assuming id holds the value to convert (e.g. the year) and arr is a byte array that is large enough.
int i = 0;
while (id > 0)
{
    int twodigits = id % 100;                              // need 2 digits per byte
    arr[i] = (byte)(twodigits % 10 + twodigits / 10 * 16); // ones digit in the low 4 bits, tens digit shifted up by 4 bits
    id /= 100;
    i++;
}
A more general solution:
private IEnumerable<Byte> GetBytes(Decimal value)
{
    Byte currentByte = 0;
    Boolean odd = true;
    while (value > 0)
    {
        if (odd)
            currentByte = 0;
        Decimal rest = value % 10;
        value = (value - rest) / 10;
        currentByte |= (Byte)(odd ? (Byte)rest : (Byte)((Byte)rest << 4));
        if (!odd)
            yield return currentByte;
        odd = !odd;
    }
    if (!odd)
        yield return currentByte;
}
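As a usage sketch (called from inside the same class, with System.Linq available for ToArray()), this also yields the least significant digit pair first:
byte[] bcd = GetBytes(2010m).ToArray(); // { 0x10, 0x20 }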
Same version as Peter O.'s, but in VB.NET:
Public Shared Function ToBcd(ByVal pValue As Integer) As Byte()
    If pValue < 0 OrElse pValue > 99999999 Then Throw New ArgumentOutOfRangeException("value")
    Dim ret As Byte() = New Byte(3) {} 'All bytes are init with 0's
    For i As Integer = 0 To 3
        ret(i) = CByte(pValue Mod 10)
        pValue = Math.Floor(pValue / 10.0)
        ret(i) = ret(i) Or CByte((pValue Mod 10) << 4)
        pValue = Math.Floor(pValue / 10.0)
        If pValue = 0 Then Exit For
    Next
    Return ret
End Function
The trick here is to be aware that simply using pValue /= 10 would round the value, so if, for instance, the argument is 16, the first nibble will be correct, but the result of the division will be 2 (as 1.6 is rounded up). Therefore I use the Math.Floor method.
I made a generic routine posted at IntToByteArray that you could use like:
var yearInBytes = ConvertBigIntToBcd(2010, 2);
static byte[] IntToBCD(int input) {
    byte[] bcd = new byte[] {
        (byte)(input >> 8),
        (byte)(input & 0x00FF)
    };
    return bcd;
}
