Convert large binary string to decimal string - C#

I've got a large text containing a number as a binary value, e.g. '123' would be '001100010011001000110011'. EDIT: it should be '1111011'.
Now I want to convert it to the decimal system, but the number is too large for Int64.
So, what I want: Convert a large binary string to decimal string.

This'll do the trick:
public string BinToDec(string value)
{
    // BigInteger can be found in the System.Numerics assembly
    // (add a reference and "using System.Numerics;")
    BigInteger res = 0;

    // I'm totally skipping error handling here
    foreach (char c in value)
    {
        // shift the accumulator left one bit, then add the incoming bit
        res <<= 1;
        res += c == '1' ? 1 : 0;
    }
    return res.ToString();
}
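A quick check against the example from the question (assuming the method above is in scope):

Console.WriteLine(BinToDec("1111011")); // prints 123, matching the EDIT above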

C# Cannot convert `char' expression to type `string' [closed]

I have this code:
public class Kata
{
    public static bool Narcissistic(int value)
    {
        // add variable to hold the final result
        int finalResult = 0;
        // get the length of the value input
        int valLength = value.ToString().Length;
        // convert the value given into a string
        string valString = value.ToString();
        // iterate over each digit and multiply it by the length of the value input
        for (int i = 0; i < valLength; i++)
        {
            // convert char at index [i] of the stringified value to int and multiply by
            // the # of digits, adding to the result (this line raises the error:
            // int.Parse expects a string, but valString[i] is a char)
            finalResult += int.Parse(valString[i]) * valLength;
        }
        // return the result
        return finalResult == value;
    }
}
I'm getting an error when I run this that I understand, but don't quite know how to fix. My goal is to take a number (i.e. 1234) and multiply each digit by the total number of digits the value contains (i.e. 1*4 + 2*4 + 3*4, etc.).
Assuming you have just Latin digit characters 0 to 9, you could use (valString[i] - '0') instead of int.Parse(valString[i]).
valString[i] is a char, not a string (which int.Parse() would expect). Chars are automatically converted to their Unicode integer code in C#, and as the digits 0 to 9 have consecutive Unicode values, '0' - '0' is the Unicode value of the character for the digit 0 minus itself, i.e. 0, and so on up to '9' - '0', which is 9 more than 0, i.e. 9.
This would not work if there could be non-Latin digits or non-digit characters in your string, but looking at your complete code, the assumption is met, as you are actually converting from an int.
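Applied to the loop from the question, this is a one-line change (a sketch, assuming the rest of the method stays as posted):

for (int i = 0; i < valLength; i++)
{
    // '0'..'9' minus '0' yields the ints 0..9
    finalResult += (valString[i] - '0') * valLength;
}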
Here's a better option that avoids strings altogether:
public static bool Narcissistic(int value)
{
    if (value < 0) return false;

    int original = value; // the loop below consumes value, so keep a copy
    int sum = 0;
    int count = 0;
    while (value > 0)
    {
        sum += value % 10;
        value /= 10;
        count++;
    }
    return original == (sum * count);
}
Basically value % 10 will give you the least significant digit. Then value /= 10 will truncate that digit. Once value is zero you've seen all the digits, which is why the original value has to be saved before the loop. And your formula (4*1 + 4*2 + 4*3 + 4*4) can of course just be 4*(1+2+3+4), which is the sum of the digits times the number of digits.
And presumably no negative number would be "Narcissistic" so you can just return false for them.
valString[i] will be a char - because that's what indexing into a string gives you. int.Parse expects a string. The simplest fix would probably be just:
finalResult += int.Parse(valString[i].ToString()) * valLength;
Assuming your intention is to turn a single-character digit into a one-digit integer.
This is less "dangerous" than the traditional hack of subtracting '0' because you can rely on int.Parse to throw an exception if the char is anything but numeric.
You will also need to make the input argument uint, or define a behavior for negative numbers.
int.Parse() expects a string, but valString[i] is a character. You could just change this to valString[i].ToString(), but rather than parsing you can take advantage of the fact that the digit characters are encoded in sequence, by casting them to int and subtracting the value of the character '0':
public static bool Narcissistic(int value)
{
    int zero = (int)'0';
    string valueString = value.ToString();
    int total = 0;
    foreach (char c in valueString)
    {
        int digit = (int)c - zero;
        total += digit * valueString.Length;
    }
    return total == value;
}
or
public static bool Narcissistic(int value)
{
    // requires "using System.Linq;"
    int zero = (int)'0';
    string valueString = value.ToString();
    return valueString.Select(c => (c - zero) * valueString.Length).Sum() == value;
}
If you really want to you can write the second option as a one-liner:
public static bool Narcissistic(int value)
{
    return value.ToString().Select(c => c - '0').Sum() * value.ToString().Length == value;
}

Convert BigInteger Binary to BigInteger Number?

Currently I am using the long integer type. I used the following to convert from/to binary/number:
Convert.ToInt64(BinaryString, 2); //Convert binary string of base 2 to number
Convert.ToString(LongNumber, 2); //Convert long number to binary string of base 2
Now the numbers I am using have exceeded 64 bits, so I started using BigInteger, but I can't seem to find the equivalent of the code above.
How can I convert from a binary string that has over 64 bits to a BigInteger number, and vice versa?
Update:
The references in the answer contain the answer I want, but I am having some trouble with the conversion from number to binary.
I have used the following code which is available in the first reference:
public static string ToBinaryString(this BigInteger bigint)
{
    var bytes = bigint.ToByteArray();
    var idx = bytes.Length - 1;

    // Create a StringBuilder having appropriate capacity.
    var base2 = new StringBuilder(bytes.Length * 8);

    // Convert first byte to binary.
    var binary = Convert.ToString(bytes[idx], 2);

    // Ensure leading zero exists if value is positive.
    if (binary[0] != '0' && bigint.Sign == 1)
    {
        base2.Append('0');
    }

    // Append binary string to StringBuilder.
    base2.Append(binary);

    // Convert remaining bytes adding leading zeros.
    for (idx--; idx >= 0; idx--)
    {
        base2.Append(Convert.ToString(bytes[idx], 2).PadLeft(8, '0'));
    }
    return base2.ToString();
}
The result I got is wrong:
100001000100000000000100000110000100010000000000000000000000000000000000 ===> 2439583056328331886592
2439583056328331886592 ===> 0100001000100000000000100000110000100010000000000000000000000000000000000
If you put the resulting binary strings under each other, you will notice that the conversion is correct and that the problem is just the leading zero on the left:
100001000100000000000100000110000100010000000000000000000000000000000000
0100001000100000000000100000110000100010000000000000000000000000000000000
I tried reading the explanation provided in the code and changing it, but no luck.
Update 2:
I was able to solve it by changing the following in the code:
// Ensure leading zero exists if value is positive.
if (binary[0] != '0' && bigint.Sign == 1)
{
    base2.Append('0');
    // Append binary string to StringBuilder.
    base2.Append(binary);
}
Unfortunately, there is nothing built-in in the .NET framework.
Fortunately, the StackOverflow community has already solved both problems:
BigInteger -> Binary: BigInteger to Hex/Decimal/Octal/Binary strings?
Binary -> BigInteger: C# Convert large binary string to decimal system
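The second link boils down to the same shift-and-add loop shown at the top of this page. As a one-liner it might look like this (a sketch, assuming "using System.Linq;" and "using System.Numerics;" and a string containing only '0'/'1' characters):

BigInteger FromBinaryString(string bits) =>
    bits.Aggregate(BigInteger.Zero, (acc, c) => (acc << 1) + (c == '1' ? 1 : 0));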
There is a good reference on MSDN about BigIntegers. Can you check it?
https://msdn.microsoft.com/en-us/library/system.numerics.biginteger(v=vs.110).aspx
Also there is a post on converting from binary to BigInteger: Conversion of a binary representation stored in a list of integers (little endian) into a Biginteger
This example is from MSDN.
string positiveString = "91389681247993671255432112000000";
string negativeString = "-90315837410896312071002088037140000";
BigInteger posBigInt = 0;
BigInteger negBigInt = 0;

try
{
    posBigInt = BigInteger.Parse(positiveString);
    Console.WriteLine(posBigInt);
}
catch (FormatException)
{
    Console.WriteLine("Unable to convert the string '{0}' to a BigInteger value.",
                      positiveString);
}

if (BigInteger.TryParse(negativeString, out negBigInt))
    Console.WriteLine(negBigInt);
else
    Console.WriteLine("Unable to convert the string '{0}' to a BigInteger value.",
                      negativeString);

// The example displays the following output:
//    91389681247993671255432112000000
//    -90315837410896312071002088037140000

Counting Precision Digits

How do I count the precision digits of a C# decimal type?
E.g. 12.001 = 3 precision digits.
I would like to throw an error if a precision greater than x is present.
Thanks.
public int CountDecPoint(decimal d)
{
    string[] s = d.ToString().Split('.');
    return s.Length == 1 ? 0 : s[1].Length;
}
Normally the decimal separator is ., but to deal with different cultures this code is better:
public int CountDecPoint(decimal d)
{
    // CultureInfo lives in System.Globalization
    char sep = CultureInfo.CurrentCulture.NumberFormat.NumberDecimalSeparator[0];
    string[] s = d.ToString().Split(sep);
    return s.Length == 1 ? 0 : s[1].Length;
}
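Matching the question's example (a quick check, assuming either version above is in scope):

Console.WriteLine(CountDecPoint(12.001m)); // 3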
You can get the "scale" of a decimal like this:
static byte GetScale(decimal d)
{
    return BitConverter.GetBytes(decimal.GetBits(d)[3])[2];
}
Explanation: decimal.GetBits returns an array of four int values of which we take only the last one. As described on the linked page, we need only the second to last byte from the four bytes that make up this int, and we do that with BitConverter.GetBytes.
Examples: The scale of the number 3.14m is 2. The scale of 3.14000m is 5. The scale of 123456m is 0. The scale of 123456.0m is 1.
If the code may run on a big-endian system, you will likely have to change this to BitConverter.GetBytes(decimal.GetBits(d)[3])[BitConverter.IsLittleEndian ? 2 : 1] or something similar. I have not tested that.
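A quick demonstration of the examples above (assuming GetScale is in scope):

Console.WriteLine(GetScale(3.14m));     // 2
Console.WriteLine(GetScale(3.14000m));  // 5
Console.WriteLine(GetScale(123456m));   // 0
Console.WriteLine(GetScale(123456.0m)); // 1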
I know I'm resurrecting an ancient question, but here's a version that doesn't rely on string representations and actually ignores trailing zeros. If that's even desired, of course.
public static int GetMinPrecision(this decimal input)
{
    if (input < 0)
        input = -input;

    int count = 0;
    input -= decimal.Truncate(input);
    while (input != 0)
    {
        ++count;
        input *= 10;
        input -= decimal.Truncate(input);
    }
    return count;
}
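For contrast with GetScale above (assuming both methods are in scope):

Console.WriteLine(3.14000m.GetMinPrecision()); // 2 - trailing zeros ignored
Console.WriteLine(GetScale(3.14000m));         // 5 - trailing zeros counted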
I would like to throw an error if a precision greater than x is present
This looks like the simplest way:
void AssertPrecision(decimal number, int decimals)
{
    if (number != decimal.Round(number, decimals, MidpointRounding.AwayFromZero))
        throw new Exception();
}

EBCDIC to ASCII conversion, handling numeric values

I'm attempting to convert files from EBCDIC to ASCII format and have run into an interesting issue. The files contain fixed-length records with some fields being signed binary integers (described as B4 in the record layout) and long-precision numeric values (described as L8 in the record layout). I've been able to convert character data with no problem, but I'm not sure how to go about converting these numeric values. From a reference manual for the original system (an IBM 5110), the fields are described below.
"B indicates the length (2, 4, or 8 bytes) of numeric data items in fixed-point signed binary integer format that are to be converted to BASIC internal data format. For record I/O file input, the next 2, 4, or 8 bytes in the record contain a signed binary value to be converted by the system into internal data format and assigned to the variable(s) specified in the READ FILE or REREAD FILE statement using a FORM statement."
and
"L indicates long-precision (8 characters) for numeric values. For input, this entry indicates that an eight-position, long-precision value in the record is to be assigned without conversion to a corresponding numeric variable specified in the READ FILE or REREAD FILE statement."
EDIT: Here's the code I'm using for the conversion:
private void ConvertFile(EbcdicFile file)
{
    if (file == null) return;

    var filePath = Path.Combine(file.Path, file.FileName);
    if (!File.Exists(filePath))
    {
        this.Logger.Info(string.Format("Cannot convert file {0}. It does not exist.", filePath));
        return;
    }

    var ebcdic = Encoding.GetEncoding(37);
    string convertedFilepath = Path.Combine(file.Path, file.ConvertedFileName);
    byte[] fileData = File.ReadAllBytes(filePath);

    if (!file.HasNumericFields)
        File.WriteAllBytes(convertedFilepath, Encoding.Convert(ebcdic, Encoding.ASCII, fileData));
    else
    {
        var convertedFileData = new List<byte>();
        for (int position = 0; position < fileData.Length; position += file.RecordLength)
        {
            var segment = new ArraySegment<byte>(fileData, position, file.RecordLength);
            file.Fields.ForEach(field =>
            {
                var fieldSegment = segment.Array.Skip(segment.Offset + field.Start - 1).Take(field.Length);
                if (field.Type.Equals("string", StringComparison.OrdinalIgnoreCase))
                {
                    convertedFileData.AddRange(
                        Encoding.Convert(ebcdic, Encoding.ASCII, fieldSegment.ToArray())
                    );
                }
                else if (field.Type.Equals("B4", StringComparison.OrdinalIgnoreCase))
                {
                    // Not sure how to convert this field
                }
                else if (field.Type.Equals("L8", StringComparison.OrdinalIgnoreCase))
                {
                    // Not sure how to convert this field
                }
            });
        }
        File.WriteAllBytes(convertedFilepath, convertedFileData.ToArray());
    }
}
You must first know the fixed record size. Use FileStream.Read() to read one record's worth of bytes, then Encoding.GetString() to convert it to a string.
Then fish the fields out of the record using String.Substring(). A B4 is simply a Substring call with a length of 4, an L8 one with a length of 8. Further convert such a field to a number with Decimal.Parse(). You may have to divide the result; it wasn't clear what fixed-point multiplier is used. Good odds for 100.
Okay, so I've figured out how to convert both fields. B4 fields are very straightforward: they are essentially a 4-byte array which can be converted to an integer.
// by holds the field's 4 raw bytes, e.g. byte[] by = fieldSegment.ToArray();
// The IBM 5110 was a big-endian machine, so reverse the array on little-endian hosts
if (BitConverter.IsLittleEndian)
    Array.Reverse(by);
int value = BitConverter.ToInt32(by, 0);
The L8 fields are 8-byte arrays that represent an IBM double-precision float. There are many ways this can be converted to an IEEE 754 float. A few examples can be found at:
How To Read IBM 370 Data from a Binary File
Transform between IEEE, IBM or VAX floating point number formats and bytes expressions
Here's the version I used, based on guidance from the articles:
private double IbmFloatToDouble(byte[] value)
{
    if (ReferenceEquals(null, value))
        throw new ArgumentNullException("value");

    if (BitConverter.ToInt64(value, 0) == 0)
        return 0;

    int exponentBias = 64;
    int ibmBase = 16;

    int signValue = (value[0] & 0x80) >> 7;
    int exponentValue = (value[0] & 0x7f);

    double fraction1 = (value[1] << 16) + (value[2] << 8) + value[3];
    // cast to long so the shift cannot overflow Int32 when value[4] has its high bit set
    double fraction2 = ((long)value[4] << 24) + (value[5] << 16) + (value[6] << 8) + value[7];

    double exponent24 = 16777216.0;          // 2^24
    double exponent56 = 72057594037927936.0; // 2^56

    double mantissa1 = fraction1 / exponent24;
    double mantissa2 = fraction2 / exponent56;
    double mantissa = mantissa1 + mantissa2;

    double exponent = Math.Pow(ibmBase, exponentValue - exponentBias);
    double sign = signValue == 0 ? 1.0 : -1.0;

    return sign * mantissa * exponent;
}
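As a quick sanity check (assuming the method above): 0x41 0x10 0x00 ... 0x00 is the IBM hexadecimal-float encoding of 1.0 (sign 0, biased base-16 exponent 0x41, mantissa 1/16):

var one = new byte[] { 0x41, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
Console.WriteLine(IbmFloatToDouble(one)); // prints 1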

How do I convert a decimal number with a point to a binary number in C#?

On this site I see many answers about things like converting a decimal number to binary,
but they refer to numbers without a point (int).
I want to know how to convert a decimal number with a point, like "332.434", to binary in C#.
Here is an example I have seen:
using System;

namespace _01.Decimal_to_Binary
{
    class DecimalToBinary
    {
        static void Main(string[] args)
        {
            Console.Write("Decimal: ");
            int decimalNumber = int.Parse(Console.ReadLine());
            int remainder;
            string result = string.Empty;
            while (decimalNumber > 0)
            {
                remainder = decimalNumber % 2;
                decimalNumber /= 2;
                result = remainder.ToString() + result;
            }
            Console.WriteLine("Binary: {0}", result);
        }
    }
}
The example refers to converting from an int, without a point.
Thanks.
Just use a BitConverter to get the bytes, then loop over them, converting each byte to a string of bits and appending it to the previous ones.
byte[] byteArray = BitConverter.GetBytes(MyDouble);
string ByteString = System.String.Empty;
for (int i = 0; i < byteArray.Length; i++)
    ByteString += Convert.ToString(byteArray[i], 2).PadLeft(8, '0'); // += so each byte's bits are appended
You may have to do some tinkering to get the bits in the correct order, but I assume ByteString will have the high-order bits on the left. Here's the MSDN page for that ToString method: http://msdn.microsoft.com/en-us/library/8s62fh68.aspx
You can't simply convert a non-integer number to a binary format. E.g. for 3.141592 a computer keeps a sign (+/-), the significand stored with a leading zero (0.3141592), and an exponent (E+1). So you need to keep all 3 parts. Read more on Wikipedia: http://en.wikipedia.org/wiki/Floating_point#Representable_numbers.2C_conversion_and_rounding
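If what you want is the textual binary expansion of a value like 332.434 rather than its raw IEEE 754 bytes, you can convert the integer part with the usual divide-by-two loop and the fractional part by repeatedly doubling it, emitting one bit each time. A minimal sketch (ToBinary is a hypothetical helper; most decimal fractions have an infinite binary expansion, so it truncates after maxBits, and negative input is not handled):

static string ToBinary(double value, int maxBits = 16)
{
    long intPart = (long)value;
    double frac = value - intPart;

    // integer part: standard base-2 conversion
    string result = Convert.ToString(intPart, 2) + ".";

    // fractional part: doubling shifts the binary point right;
    // the digit that crosses into the ones place is the next bit
    for (int i = 0; i < maxBits && frac > 0; i++)
    {
        frac *= 2;
        result += (int)frac; // 0 or 1
        frac -= (int)frac;
    }
    return result;
}

// ToBinary(332.434) -> "101001100.0110111100011010"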
