I'm converting binary numbers to decimal numbers fine until the result surpasses $2^{64}$. It seems the data type can't hold values larger than $2^{64}$: when handling huge binary numbers, the value stored in my base2 variable overflows because the data type is too small, and it wraps around to 0. Any ideas how I can fix or work around this?
//Vector to store the Binary # that user has input
List<ulong> binaryVector = new List<ulong>();
//Vector to store the Decimal Vector I will output
List<string> decimalVector = new List<string>();
//Variable to store the input
string input = "";
//Variables to do conversions
ulong base2 = 1;
ulong decimalOutput = 0;
Console.WriteLine("2^64=" + Math.Pow(2.00,64));
//Prompt User
Console.WriteLine("Enter the Binary Number you would like to convert to decimal: ");
input = Console.ReadLine();
//Store the user input in a vector
for(int i = 0; i < input.Length; i++)
{
//If we find a 0, store it in the appropriate vector, otherwise we found a 1..
if (input[i].Equals('0'))
{
binaryVector.Add(0);
}
else
{
binaryVector.Add(1);
}
}
//Reverse the vector
binaryVector.Reverse();
//Convert the Binary # to Decimal
for(int i = 0; i < binaryVector.Count; i++)
{
//0101 for example: 0 + (0 * 1) = 0, thus 0 is our current decimal,
//while our base2 variable doubles to the next power of 2 (1 * 2 = 2)..
decimalOutput = decimalOutput + (binaryVector[i] * base2);
base2 = base2 * 2;
Console.WriteLine("\nTest base2 Output Position[" + i + "]::" + base2);
}
//Convert Decimal Output to String
string tempString = decimalOutput.ToString();
A ulong can only hold values between 0 and $2^{64}-1$; see UInt64.MaxValue.
Use a BigInteger when you want to deal with bigger values.
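For instance, the conversion loop from the question can be reworked to accumulate into a BigInteger (a sketch; the class and method names are mine, and the project needs a reference to System.Numerics):

```csharp
using System;
using System.Numerics; // requires a reference to System.Numerics

class BinaryToDecimal
{
    // Same positional loop as the question, but accumulating into a
    // BigInteger so the result has no fixed upper bound.
    static BigInteger ToDecimal(string binary)
    {
        BigInteger base2 = 1;
        BigInteger decimalOutput = 0;
        // Walk the string right to left so base2 starts at 2^0.
        for (int i = binary.Length - 1; i >= 0; i--)
        {
            if (binary[i] == '1')
                decimalOutput += base2;
            base2 *= 2;
        }
        return decimalOutput;
    }

    static void Main()
    {
        // "1" followed by 64 zeros is 2^64 in binary, one past ulong.MaxValue.
        Console.WriteLine(ToDecimal("1" + new string('0', 64)));
        // prints 18446744073709551616
    }
}
```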
My end goal is to take a number like 29, pull it apart, and add the resulting digits. So if the number is 29, for example, the answer would be 2 + 9 = 11.
When I'm debugging, I can see that the digit characters are being held, but the values being added are wrong, in this case 50 and 57, so my answer is 107. I have no idea where these values are coming from or where to begin to fix it.
My code is:
class Program
{
static void Main(string[] args)
{
int a = 29;
int answer = addTwoDigits(a);
Console.ReadLine();
}
public static int addTwoDigits(int n)
{
string number = n.ToString();
char[] a = number.ToCharArray();
int total = 0;
for (int i = 0; i < a.Length; i++)
{
total = total + a[i];
}
return total;
}
}
As mentioned, the issue with your code is that a character has an ASCII code value when cast to int, which doesn't match the numeric digit it represents. Instead of messing with strings and characters, just use good old math.
public static int AddDigits(int n)
{
int total = 0;
while(n>0)
{
total += n % 10;
n /= 10;
}
return total;
}
Modulo by 10 yields the least significant digit, and because integer division truncates, n /= 10 drops that digit; n eventually becomes 0 when you run out of digits.
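Tracing the loop above with the question's input gives the expected answer (a quick sanity check, not part of the original answer):

```csharp
using System;

class DigitSumDemo
{
    // The same digit-sum loop as above.
    public static int AddDigits(int n)
    {
        int total = 0;
        while (n > 0)
        {
            total += n % 10; // peel off the least significant digit
            n /= 10;         // drop that digit
        }
        return total;
    }

    static void Main()
    {
        Console.WriteLine(AddDigits(29)); // prints 11: adds 9, then 2
    }
}
```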
Your code is actually adding the decimal ASCII value of each char.
Take a look at https://www.cs.cmu.edu/~pattis/15-1XX/common/handouts/ascii.html
The decimal values of '2' and '9' are 50 and 57 respectively. You need to convert each char into an int before doing your addition:
int val = (int)Char.GetNumericValue(a[i]);
Try this:
public static int addTwoDigits(int n)
{
string number = n.ToString();
char[] a = number.ToCharArray();
int total = 0;
for (int i = 0; i < a.Length; i++)
{
total = total + (int)Char.GetNumericValue(a[i]);
}
return total;
}
A digit character always carries its ASCII code, so use the GetNumericValue() method to get the digit's value instead of its ASCII code.
Just for fun, I thought I'd see if I could do it in one line using LINQ and here it is:
public static int AddWithLinq(int n)
{
return n.ToString().Aggregate(0, (total, c) => total + int.Parse(c.ToString()));
}
I don't think it would be particularly "clean" code, but it may be educational at best!
You should use int.TryParse:
int num;
if (int.TryParse(a[i].ToString(), out num))
{
total += num;
}
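Placed in a complete method, the TryParse approach might look like this (a sketch; the method name and string-based signature are my own choices so the fragment above can run on its own):

```csharp
using System;

class TryParseDigits
{
    // Digit sum via int.TryParse on each character.
    public static int AddDigits(string number)
    {
        int total = 0;
        foreach (char c in number)
        {
            int num;
            if (int.TryParse(c.ToString(), out num))
            {
                total += num; // non-digit characters are skipped, not added
            }
        }
        return total;
    }

    static void Main()
    {
        Console.WriteLine(AddDigits("29")); // prints 11
    }
}
```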
Your problem is that you're adding char values. Remember that a char is an integer value representing a character in ASCII. When you add a[i] to total, you're adding the int value that represents that char; the compiler casts it automatically.
The problem is in this code line:
total = total + a[i];
The code above is equal to this code line:
total += (int)a[i];
// If a[i] = '2', the character value of the ASCII table is 50.
// Then, (int)a[i] = 50.
To solve your problem, you must replace that line with this:
total += (int)Char.GetNumericValue(a[i]);
// If a[i] = '2'.
// Then, (int)Char.GetNumericValue(a[i]) = 2.
You can see this answer to see how to convert a numeric value
from char to int.
At this page you can see the ASCII table of values.
public static int addTwoDigits(int n)
{
string number = n.ToString();
char[] a = number.ToCharArray();
int total = 0;
for (int i = 0; i < a.Length; i++)
{
total += Convert.ToInt32(number[i].ToString());
}
return total;
}
You don't need to convert the number to a string to find the digits. @juharr already explained how you can calculate the digits and the total in a loop. The following is a recursive version:
int addDigit(int total,int n)
{
return (n < 10) ? total + n
                : addDigit(total + n % 10, n / 10);
}
It can be called as addDigit(0, 234233433) and returns 27. If n is less than 10, we are adding the last digit. Otherwise we extract the digit, add it to the total, divide by 10, and repeat.
One could get clever and use currying to get rid of the initial total:
int addDigits(int i) => addDigit(0, i);
addDigits(234233433) also returns 27.
If the number is already a string, one can take advantage of the fact that a string can be treated as a char array, and chars can be converted to ints implicitly:
var total = "234233433".Sum(c=>c-'0');
This can handle arbitrarily large strings, as long as the total doesn't exceed int.MaxValue, eg:
"99999999999999999999".Sum(x=>x-'0'); // 20 9s returns 180
Unless the number is already in string form, though, this isn't efficient, nor does it verify that the contents are an actual number.
Below is the checksum description.
The checksum is four ASCII character digits representing the binary sum of the characters including the
first character of the transmission and up to and including the checksum field identifier characters.
To calculate the checksum add each character as an unsigned binary number, take the lower 16 bits of the
total and perform a 2's complement. The checksum field is the result represented by four hex digits.
To verify the correct checksum on received data, simply add all the hex values including the checksum. It
should equal zero.
This is the implementation for an ASCII string, but my input string is now UTF-8.
Can anyone give some idea of how to revise the implementation for UTF-8 encoding? Thanks very much.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace SIP2
{
// Adapted from VB.NET from the Library Tech Guy blog
// http://librarytechguy.blogspot.com/2009/11/sip2-checksum_13.html
public class CheckSum
{
public static string ApplyChecksum(string strMsg)
{
int intCtr;
char[] chrArray;
int intAscSum;
bool blnCarryBit;
string strBinVal = String.Empty;
string strInvBinVal;
string strNewBinVal = String.Empty;
// Transfer the SIP message to a character array. Loop through each character of the array,
// converting the character to an ASCII value and adding the value to a running total.
intAscSum = 0;
chrArray = strMsg.ToCharArray();
for (intCtr = 0; intCtr <= chrArray.Length - 1; intCtr++)
{
intAscSum = intAscSum + (chrArray[intCtr]);
}
// Next, convert the ASCII sum to a binary string by:
// 1) taking the remainder of the ASCII sum divided by 2
// 2) Repeat until sum reaches 0
// 3) Pad to 16 digits with leading zeroes
do
{
strBinVal = (intAscSum % 2).ToString() + strBinVal;
intAscSum = intAscSum / 2;
} while (intAscSum > 0);
strBinVal = strBinVal.PadLeft(16, '0');
// Next, invert all bits in binary number.
chrArray = strBinVal.ToCharArray();
strInvBinVal = "";
for (intCtr = 0; intCtr <= chrArray.Length - 1; intCtr++)
{
if (chrArray[intCtr] == '0') { strInvBinVal = strInvBinVal + '1'; }
else { strInvBinVal = strInvBinVal + '0'; }
}
// Next, add 1 to the inverted binary digit. Loop from least significant digit (rightmost) to most (leftmost);
// if digit is 1, flip to 0 and retain carry bit to next significant digit.
blnCarryBit = true;
chrArray = strInvBinVal.ToCharArray();
for (intCtr = chrArray.Length - 1; intCtr >= 0; intCtr--)
{
if (blnCarryBit == true)
{
if (chrArray[intCtr] == '0')
{
chrArray[intCtr] = '1';
blnCarryBit = false;
}
else
{
chrArray[intCtr] = '0';
blnCarryBit = true;
}
}
strNewBinVal = chrArray[intCtr] + strNewBinVal;
}
// Finally, convert binary digit to hex value, append to original SIP message.
return strMsg + (Convert.ToInt16(strNewBinVal, 2)).ToString("X");
}
}
}
Replace the code
for (intCtr = 0; intCtr <= chrArray.Length - 1; intCtr++)
{
intAscSum = intAscSum + (chrArray[intCtr]);
}
chrArray[intCtr] yields the character's code as a decimal value; for example, "A" is 65. ASCII encoding uses only 1 byte per character, while UTF-8 uses one or more bytes per character. The loop over chrArray is designed for ASCII, so feeding it UTF-8 input (where a character can span more than one byte) does not work.
With
byte[] bytes = Encoding.UTF8.GetBytes(strMsg);
for (int i = 0; i < bytes.Length; i++)
{
intAscSum = intAscSum + bytes[i];
}
Add up all the bytes, because one UTF-8 char can be more than one byte.
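Putting the two fragments together, with the byte array declared before the loop, the summing step might look like this (a sketch; the method name is mine):

```csharp
using System;
using System.Text;

class Utf8SumDemo
{
    // Sum the message's UTF-8 bytes instead of its chars, so multi-byte
    // characters contribute every byte to the running total.
    public static int SumBytes(string strMsg)
    {
        byte[] bytes = Encoding.UTF8.GetBytes(strMsg); // declared before the loop
        int intAscSum = 0;
        for (int i = 0; i < bytes.Length; i++)
        {
            intAscSum = intAscSum + bytes[i];
        }
        return intAscSum;
    }

    static void Main()
    {
        Console.WriteLine(SumBytes("A"));  // 65, same as the ASCII version
        Console.WriteLine(SumBytes("é"));  // 364 = 0xC3 + 0xA9, two UTF-8 bytes
    }
}
```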
Console.WriteLine("Please enter a decimal number:");
int decNumber = int.Parse(Console.ReadLine());
string binary = Convert.ToString((long)decNumber, 2);
Console.WriteLine("\n" + "The binary conversion of the number {0} is: {1}", decNumber, binary);
Console.WriteLine("\n" + "Please select a bit position: ");
int position = int.Parse(Console.ReadLine());
Console.WriteLine("\n" + "Please select a new value to replace the old one: ");
int newValue = int.Parse(Console.ReadLine());
Basically, what I want this program to do is convert a decimal number to binary and then replace the value at the nth position of the binary representation.
I have tried all sorts of things, but I just can't seem to find an elegant solution that actually works. Additional explanation would be helpful, and no, this is not my homework.
char newValue = char.Parse(Console.ReadLine());
StringBuilder sb = new StringBuilder(binary);
sb[position] = newValue;
binary= sb.ToString();
Swapping bits in an integer involves some complex logical operations (see Swapping bits in a positive 32bit integer in C#), but BitArray can make it a bit easier:
static int swapBits(int i, int position1, int position2)
{
// convert int i to BitArray
int[] intArray = { i };
var bitArray = new System.Collections.BitArray(intArray);
// swap bits
var bit1 = bitArray[position1];
bitArray[position1] = bitArray[position2];
bitArray[position2] = bit1;
// convert bitArray to int i
bitArray.CopyTo(intArray, 0);
i = intArray[0];
return i;
}
Note that the positions start from 0 and count from the right, so for example:
int i = swapBits(3, 0, 2); // 3 becomes 6
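For comparison, the same swap can be done with plain bitwise operators instead of BitArray (a sketch of the usual XOR trick, not taken from the linked answer): extract both bits, and if they differ, toggle both.

```csharp
using System;

class BitSwapDemo
{
    // Extract the two bits; if they differ, toggling both with XOR swaps them.
    static int SwapBits(int i, int position1, int position2)
    {
        int bit1 = (i >> position1) & 1;
        int bit2 = (i >> position2) & 1;
        if (bit1 != bit2)
        {
            i ^= (1 << position1) | (1 << position2); // flip both bits at once
        }
        return i;
    }

    static void Main()
    {
        Console.WriteLine(SwapBits(3, 0, 2)); // 6, matching the BitArray version
    }
}
```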
I am writing a checksum for a manifest file for a courier-based system written in C# in the .NET environment.
I need to have an 8 digit field representing the checksum which is calculated as per the following:
Record Check Sum Algorithm
Form the 32-bit arithmetic sum of the products of
• the 7 low order bits of each ASCII character in the record
• the position of each character in the record numbered from 1 for the first character.
for the length of the record up to but excluding the check sum field itself :
Sum = Σ_i ASCII(i-th character in the record) × i
where i runs over the length of the record excluding the check sum field.
After performing this calculation, convert the resultant sum to binary and split the 32 low order
bits of the Sum into eight blocks of 4 bits (octets). Note that each of the octets has a decimal
number value ranging from 0 to 15.
Add an offset of ASCII 0 ( zero ) to each octet to form an ASCII code number.
Convert the ASCII code number to its equivalent ASCII character thus forming printable
characters in the range 0123456789:;<=>?.
Concatenate each of these characters to form a single string of eight (8) characters in overall
length.
I am not the greatest at mathematics so I am struggling to write the code correctly as per the documentation.
I have written the following so far:
byte[] sumOfAscii = null;
for(int i = 1; i< recordCheckSum.Length; i++)
{
string indexChar = recordCheckSum.ElementAt(i).ToString();
byte[] asciiChar = Encoding.ASCII.GetBytes(indexChar);
for(int x = 0; x<asciiChar[6]; x++)
{
sumOfAscii += asciiChar[x];
}
}
//Turn into octets
byte firstOctet = 0;
for(int i = 0;i< sumOfAscii[6]; i++)
{
firstOctet += recordCheckSum;
}
Where recordCheckSum is a string made up of deliveryAddresses, product names etc and excludes the 8-digit checksum.
Any help with calculating this would be greatly appreciated as I am struggling.
There are notes in line as I go along. Some more notes on the calculation at the end.
uint sum = 0;
uint zeroOffset = 0x30; // ASCII '0'
byte[] inputData = Encoding.ASCII.GetBytes(recordCheckSum);
for (int i = 0; i < inputData.Length; i++)
{
int product = inputData[i] & 0x7F; // Take the low 7 bits from the record.
product *= i + 1; // Multiply by the 1 based position.
sum += (uint)product; // Add the product to the running sum.
}
byte[] result = new byte[8];
for (int i = 0; i < 8; i++) // if the checksum is reversed, make this:
// for (int i = 7; i >=0; i--)
{
uint current = (uint)(sum & 0x0f); // take the lowest 4 bits.
current += zeroOffset; // Add '0'
result[i] = (byte)current;
sum = sum >> 4; // Right shift the bottom 4 bits off.
}
string checksum = Encoding.ASCII.GetString(result);
One note, I use the & and >> operators, which you may or may not be familiar with. The & operator is the bitwise and operator. The >> operator is logical shift right.
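As a quick illustration of those two operators on sample values (my own example, not part of the checksum code):

```csharp
using System;

class BitOpsDemo
{
    static void Main()
    {
        // & masks bits: keep only the low 7 bits (0x7F) or the low 4 bits (0x0F).
        Console.WriteLine(0xC1 & 0x7F); // 65: the high bit of 0xC1 is dropped
        Console.WriteLine(0xAB & 0x0F); // 11: only the low nibble survives

        // >> shifts bits right, discarding the bits that fall off the end.
        Console.WriteLine(0xAB >> 4);   // 10: the high nibble moves down
    }
}
```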
I am trying to do string manipulation. Here's my C# code :
static void Main(string[] args)
{
string input;
string output;
int length;
Console.WriteLine("input = ");
input = Console.ReadLine();
length = input.Length;
if ((input != "") || (length != 0))
{
Random randem = new Random();
int i = -1; //because I do not want the first number to be replaced by the random number
char[] characters = input.ToCharArray();
while (i < length)
{
int num = randem.Next(0, 9);
char num1 = Convert.ToChar(num);
i = i + 2; //so that every next character will be replaced by random number.. :D
characters[i] = num1; //*error* here
}
output = new string(characters);
Console.WriteLine(output);
}
For example:
User input : "i_love_to_eat_fish"
Desired output : "i2l4v1_9o5e8t7f8s2"
notice that the only unchanged characters in
the char[] characters are: "i l v _ o e t f s". (desired output from the program)
I've already tried using this code, but still,
keep getting error at characters[i] = num1;
Am I on the right track?
I'm guessing the error you get is IndexOutOfRangeException. This is because of the i = i + 2;: the while makes sure that i is less than length, but then adding 2 can push it past the end. Just add a check that it isn't beyond the length.
i = i + 2;
if(i < length)
characters[i] = num1;
Or just change to a for loop.
Random randem = new Random();
char[] characters = input.ToCharArray();
for(int i = 1; i < length; i += 2)
{
int num = randem.Next(1, 10); // max value is exclusive
char num1 = num.ToString()[0];
characters[i] = num1;
}
output = new string(characters);
Console.WriteLine(output);
Also, as Shar1er80 points out, you're currently converting the digit to the char that has that ASCII value, not to the actual character that represents the digit. The digits 0-9 are represented by the values 48-57. You can change the call to Random.Next to be:
int num = randem.Next(48, 58); // The upper bound is exclusive, not inclusive
char num1 = (char)num;
Or as Shar1er80 does it
int num = randem.Next(0,10) // Assumming you want digits 0-9
char num1 = num.ToString()[0];
Also note that the max value for Random.Next is exclusive, so if you want to include the possibility of using a 9 you have to use an upper bound that is 1 greater than the greatest value you want.
Whenever you reach i = 17, you add 2 to i. That makes i = 19; with the length of the input equal to 18, that causes an out-of-range exception.
The error you are getting is IndexOutOfRangeException, which explains everything in itself.
It means that the index you are feeding the array in the loop is going beyond its Length - 1 (arrays have 0-based indexing).
So when you do i + 2, you need to check that i + 2 does not exceed Length - 1 at any point, which it does in your loop.
In general, just check that you are supplying indexes between 0 and Array.Length - 1.
It's because you start at index -1, and characters doesn't contain an index of -1.
EDIT: Sorry, no, the correct answer is that the condition must be while (i < length - 2).
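Putting that corrected bound together with the digit-character fix from the other answers, the loop might look like this (a sketch using the question's variable names):

```csharp
using System;

class ReplaceEveryOther
{
    static void Main()
    {
        string input = "i_love_to_eat_fish";
        int length = input.Length;
        Random random = new Random();
        char[] characters = input.ToCharArray();
        int i = -1;
        // i < length - 2 guarantees that i + 2 stays within the array bounds.
        while (i < length - 2)
        {
            i = i + 2;
            // '0' + digit yields the digit character, not a control character.
            characters[i] = (char)('0' + random.Next(0, 10));
        }
        Console.WriteLine(new string(characters));
        // e.g. "i2l4v1_9o5e8t7f8s2" (random digits vary per run)
    }
}
```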
Change this line
char num1 = Convert.ToChar(num);
To
char num1 = num.ToString()[0];
Then... Put
characters[i] = num1;
In an if block
if (i < length)
characters[i] = num1;