I am using Jon Skeet's EBCDIC implementation in .NET to read a VSAM file downloaded in binary mode via FTP from a mainframe system. It works very well for reading and writing in this encoding, but it does not provide anything for reading packed-decimal values. My file contains these, and I need to unpack them (at the cost of more bytes, obviously).
How can I do this?
My fields are defined as PIC S9(7)V99 COMP-3.
Ahh, BCD. Honk if you used it in 6502 assembly.
Of course, the best bet is to let the COBOL MOVE do the job for you! One of these possibilities may help.
(Possibility #1) Assuming you do have access to the mainframe and the source code, and the output file is ONLY for your use, modify the program so it just MOVEs the value to a plain unpacked PIC S9(7)V99.
(Possibility #2) Assuming it's not that easy (e.g., the file is input for other programs, or you can't change the code), you can write another COBOL program on the system that reads that file and writes another. Cut and paste the file record layout with the BCD into the new program for the input and output files. Modify the output version to be non-packed. Read a record, do a MOVE CORRESPONDING to transfer the data, and write, until EOF. Then transfer that file.
(Possibility #3) If you can't touch the mainframe, note the description in the article you linked in your comment. BCD is relatively simple. It could be as easy as this (vb.net):
Private Function FromBCD(ByVal BCD As String, ByVal intsz As Integer, ByVal decsz As Integer) As Decimal
Dim PicLen As Integer = intsz + decsz
Dim result As Decimal = 0
Dim val As Integer = Asc(Mid(BCD, 1, 1))
Do While PicLen > 0
result *= 10D
result += val \ 16 ' high nibble of the current byte
PicLen -= 1
If PicLen > 0 Then
result *= 10D
result += val Mod 16 ' low nibble
PicLen -= 1
BCD = Mid(BCD, 2)
End If
val = Asc(Mid(BCD, 1, 1))
Loop
If val Mod 16 = &HD& Then ' sign nibble: &HD means negative
result = -result
End If
Return result / CDec(10 ^ decsz)
End Function
I tested it with a few variations of this call:
MsgBox(FromBCD("#" & Chr(13 + 16), 2, 1))
E.g., this one is -23.1 (&H23 &H1D unpacks to digits 2, 3, 1 with a negative sign nibble). But I tested just a few, so it might still be wrong.
So then if your comp-3 starts, say, at byte 10 of the input record layout, this would solve it:
Dim valu As Decimal = FromBCD(Mid(InputLine, 10, 5), 7, 2)
Note the formulas in the data-conversion article for the number of bytes to pass in (a COMP-3 field occupies floor(digits / 2) + 1 bytes, so S9(7)V99 needs 5) and for the number of 9's before and after the V.
Store the result in a Decimal to avoid rounding errors, especially if it's money. Float and Double WILL cause you grief! If you're not processing the value, even a string is better.
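A two-line illustration of that grief (in C# for brevity):

    // The classic binary-float pitfall: double cannot represent 0.3 exactly.
    System.Console.WriteLine(0.1 + 0.2 == 0.3);    // False
    System.Console.WriteLine(0.1m + 0.2m == 0.3m); // True - decimal stores base-10 exactly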
Of course, it could be harder. Where I work, the mainframe is 9 bits per byte. Seriously. That's what makes the first two possibilities so salient. Of course, what really makes them better is the fact that you may be a PC-only programmer, and this is a great excuse to get a mainframe programmer to do the work for you! If you are so lucky as to have that option...
Peace,
-Al
I use this extension method for packed decimal (BCD) conversion:
/// <summary>
/// computes the actual decimal value from an IBM "Packed Decimal" 9(x)v99 (COBOL COMP-3) format
/// </summary>
/// <param name="value">byte[]</param>
/// <param name="precision">byte; decimal places, default 2</param>
/// <returns>decimal</returns>
// Note: needs `using System.Linq;` and a static host class, since this is an extension method.
public static decimal FromPackedDecimal(this byte[] value, byte precision = 2)
{
if (value.Length < 1)
{
throw new System.InvalidOperationException("Cannot unpack empty bytes.");
}
double power = System.Math.Pow(10, precision);
if (power > long.MaxValue)
{
throw new System.InvalidOperationException(
$"Precision too large for valid calculation: {precision}");
}
string hex = System.BitConverter.ToString(value).Replace("-", "");
var bytes = Enumerable.Range(0, hex.Length)
.Select(x => System.Convert.ToByte($"0{hex.Substring(x, 1)}", 16))
.ToList();
long place = 1;
decimal ret = 0;
for (int i = bytes.Count - 2; i > -1; i--)
{
ret += (bytes[i] * place);
place *= 10;
}
ret /= (long)power;
// Sign nibble: 0xD (or the rarer 0xB) marks a negative value; 0xC and 0xF mark positive.
return (bytes.Last() == 0x0D || bytes.Last() == 0x0B) ? -ret : ret;
}
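As a quick sanity check (assuming the method lives in a static class so the extension syntax works): five bytes of S9(7)V99 COMP-3 for -12345.67 unpack like this.

    // 0x00 0x12 0x34 0x56 0x7D is packed decimal for -12345.67 (sign nibble 0xD).
    byte[] packed = { 0x00, 0x12, 0x34, 0x56, 0x7D };
    decimal amount = packed.FromPackedDecimal(precision: 2);
    System.Console.WriteLine(amount);   // -12345.67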
Note: This is more of a logic/math problem than a specific C# problem.
I have my own class called Number - it very simply contains two separate byte arrays called Whole and Decimal. These byte arrays each represent essentially an infinitely large whole number, but, when put together the idea is that they create a whole number with a decimal part.
The bytes are stored in a little-endian format, representing a number. I'm creating a method called AddNumbers which will add two of these Numbers together.
This method relies on another method called PerformAdd, which just adds two arrays together. It simply takes in a pointer to the final byte array, a pointer to one array to add, and a pointer to the second array to add - as well as the length of each of them. The two arrays are just named "larger" and "smaller". Here is the code for this method:
private static unsafe void PerformAdd(byte* finalPointer, byte* largerPointer, byte* smallerPointer, int largerLength, int smallerLength)
{
int carry = 0;
// Go through all the items that can be added, and work them out.
for (int i = 0; i < smallerLength; i++)
{
var add = *largerPointer-- + *smallerPointer-- + carry;
// Stick the result of this addition in the "final" array.
*finalPointer-- = (byte)(add & 0xFF);
// Now, set a carry from this.
carry = add >> 8;
}
// Now, go through all the remaining items (which don't need to be added), and add them to the "final" - still working with the carry.
for (int i = smallerLength; i < largerLength; i++)
{
var wcarry = *largerPointer-- + carry;
// Stick the result of this addition in the "final" array.
*finalPointer-- = (byte)(wcarry & 0xFF);
// Now, set a carry from this.
carry = wcarry >> 8;
}
// Now, if we have anything still left to carry, carry it into a new byte.
if (carry > 0)
*finalPointer-- = (byte)carry;
}
This method isn't where the problem lies - the problem is with how I use it in the AddNumbers method. That method works like this: it organizes the two byte arrays into the "larger" (the one with more bytes) and the "smaller", and then creates pointers; it does this for Whole and Decimal separately. The problem is with the decimal part.
Let's say we're adding the numbers 1251 and 2185 together; in this situation you would get 3436 - so that works perfectly!
Take another example as well: You have the numbers 4.6 and add 1.2 - once again, this works fine, and you get 5.8. The problem comes with the next example.
Take 15.673 and 1.783: you would expect 17.456, but this actually returns 16.1456, because the leading "1" of the decimal sum is never carried over into the whole part.
So, this is my problem: How would I implement a way that knows when and how to do this? Here's the code for my AddNumbers method:
public static unsafe Number AddNumbers(Number num1, Number num2)
{
// Store the final result.
Number final = new Number(new byte[num1.Whole.Length + num2.Whole.Length], new byte[num1.Decimal.Length + num2.Decimal.Length]);
// We're going to figure out which number (num1 or num2) has more bytes, and then we'll create pointers to smallest and largest.
fixed (byte* num1FixedWholePointer = num1.Whole, num1FixedDecPointer = num1.Decimal, num2FixedWholePointer = num2.Whole, num2FixedDecPointer = num2.Decimal,
finalFixedWholePointer = final.Whole, finalFixedDecimalPointer = final.Decimal)
{
// Create a pointer and figure out which whole number has the most bytes.
var finalWholePointer = finalFixedWholePointer + (final.Whole.Length - 1);
var num1WholeLarger = num1.Whole.Length > num2.Whole.Length ? true : false;
// Store the larger/smaller whole number lengths.
var largerLength = num1WholeLarger ? num1.Whole.Length : num2.Whole.Length;
var smallerLength = num1WholeLarger ? num2.Whole.Length : num1.Whole.Length;
// Create pointers to the whole numbers (the largest amount of bytes and smallest amount of bytes).
var largerWholePointer = num1WholeLarger ? num1FixedWholePointer + (num1.Whole.Length - 1) : num2FixedWholePointer + (num2.Whole.Length - 1);
var smallerWholePointer = num1WholeLarger ? num2FixedWholePointer + (num2.Whole.Length - 1) : num1FixedWholePointer + (num1.Whole.Length - 1);
// Handle decimal numbers.
if (num1.Decimal.Length > 0 || num2.Decimal.Length > 0)
{
// Create a pointer and figure out which decimal has the most bytes.
var finalDecPointer = finalFixedDecimalPointer + (final.Decimal.Length - 1);
var num1DecLarger = num1.Decimal.Length > num2.Decimal.Length ? true : false;
// Store the larger/smaller whole number lengths.
var largerDecLength = num1DecLarger ? num1.Decimal.Length : num2.Decimal.Length;
var smallerDecLength = num1DecLarger ? num2.Decimal.Length : num1.Decimal.Length;
// Store pointers for decimals as well.
var largerDecPointer = num1DecLarger ? num1FixedDecPointer + (num1.Decimal.Length - 1) : num2FixedDecPointer + (num2.Decimal.Length - 1);
var smallerDecPointer = num1DecLarger ? num2FixedDecPointer + (num2.Decimal.Length - 1) : num1FixedDecPointer + (num1.Decimal.Length - 1);
// Add the decimals first.
PerformAdd(finalDecPointer, largerDecPointer, smallerDecPointer, largerDecLength, smallerDecLength);
}
// Add the whole number now.
PerformAdd(finalWholePointer, largerWholePointer, smallerWholePointer, largerLength, smallerLength);
}
return final;
}
The format you selected is fundamentally hard to use, and I'm not aware of anyone who uses the same format for this task. Multiplication or division in that format, for example, would be very hard to implement.
Actually, I don't think you store enough information to uniquely restore the value in the first place: how do the stored representations of 0.1 and 0.01 differ in your format? I don't think you can distinguish those two values.
The issue you are facing is a lesser side-effect of the same problem: you store binary representations of decimal values and expect to be able to infer a unique size (number of digits) for the decimal representation. You can't, because when a decimal overflow happens you are not guaranteed to get an overflow in your 256-based stored value as well; in fact, more often than not the two do not happen simultaneously.
I don't think you can resolve this issue in any way other than explicitly storing something equivalent to the number of digits after the decimal point. And if you are going to do that anyway, why not switch to a much simpler format: a single BigInteger (yes, it is part of the standard library, although there is nothing like BigDecimal) plus a scale? This is the format used by many similar libraries. In that format, 123.45 is stored as the pair (12345, -2) (the -2 giving the decimal position), while 1.2345 is stored as the pair (12345, -4). Multiplication in that format is almost trivial, given that BigInteger already implements multiplication; you just need to be able to truncate zeros at the end. Addition and subtraction are less trivial: first match the scales of the two numbers using multiplication by a power of 10, then use standard addition over BigInteger, and then normalize back (remove zeros at the end). Division is still hard, and you have to decide what rounding strategies you want to support, because the quotient of two numbers is not guaranteed to fit in a number of fixed precision.
If you just need BigDecimal in C#, I would suggest finding and using an existing implementation, for example https://gist.github.com/nberardi/2667136 (I am not the author, but it seems fine).
If you HAVE to implement it for any reason (school, etc) even then I would just resort to using BigInteger.
If you have to implement it with byte arrays... you can still benefit from the idea of using a scale. You just have to take any extra digits produced by operations such as PerformAdd and carry them over into the main number.
However, the problems don't stop there. When you begin implementing multiplication you will run into more issues, and you will inevitably have to start mixing the decimal and integer parts.
8.73*0.11 -> 0.9603
0.12*0.026 -> 0.00312
As you can see, the integer and decimal parts mix, and the decimal part grows into a longer sequence. However, if you represent these with an explicit scale:
873|2 * 11|2 -> 873*11|4 -> 9603|4 -> 0.9603
12|2 * 26|3 -> 12*26|5 -> 312|5 -> 0.00312
these problems disappear.
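To make that concrete, here is a minimal sketch of the scaled-BigInteger format (the type and member names are mine; only System.Numerics.BigInteger comes from the standard library). Normalization (trimming trailing zeros) is omitted for brevity.

    using System.Numerics;

    // The stored number is Value * 10^Scale, so 123.45 is (12345, -2)
    // and 1.2345 is (12345, -4).
    readonly struct ScaledNumber
    {
        public readonly BigInteger Value;
        public readonly int Scale;

        public ScaledNumber(BigInteger value, int scale) { Value = value; Scale = scale; }

        // Multiplication: multiply the values, add the scales.
        public static ScaledNumber Multiply(ScaledNumber a, ScaledNumber b) =>
            new ScaledNumber(a.Value * b.Value, a.Scale + b.Scale);

        // Addition: bring both operands to the smaller scale, then add the values.
        public static ScaledNumber Add(ScaledNumber a, ScaledNumber b)
        {
            if (a.Scale < b.Scale) (a, b) = (b, a);   // ensure a has the larger scale
            BigInteger shifted = a.Value * BigInteger.Pow(10, a.Scale - b.Scale);
            return new ScaledNumber(shifted + b.Value, b.Scale);
        }
    }

With this, 8.73 * 0.11 is (873, -2) * (11, -2) -> (9603, -4), i.e. 0.9603, exactly as in the worked examples above.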
I need help trying to verify CRC-16 values (also need help with CRC-32 values). I tried to sit down and understand how CRC works but I am drawing a blank.
My first problem is in using an online calculator on the message "BD001325E032091B94C412AC", whose CRC16 should be 12AC. The documentation states that the last two octets are the CRC16 value, so I input "BD001325E032091B94C4" into the site http://www.lammertbies.nl/comm/info/crc-calculation.html and receive 5A90 as the result instead of 12AC.
Does anybody know why these values are different, and where I can find code for calculating CRC16 and CRC32 values? (I plan to learn how to do this properly later, but time doesn't allow right now.)
Some more messages are as following:
16000040FFFFFFFF00015FCB
3C00003144010405E57022C7
BA00001144010101B970F0ED
3900010101390401B3049FF1
09900C800000000000008CF3
8590000000000000000035F7
00900259025902590259EBC9
0200002B00080191014BF5A2
BB0000BEE0014401B970E51E
3D000322D0320A2510A263A0
2C0001440000D60000D65E54
--Edit--
I have included more information. The documentation I was referencing is TIA-102.BAAA-A (from the TIA standard). The following is what the documentation states (trying to avoid copyright infringement as much as possible):
The Last Block in a packet comprises several octets of user information and / or
pad octets, followed by a 4-octet CRC parity check. This is referred to as the
packet CRC.
The packet CRC is a 4-octet cyclic redundancy check coded over all of the data
octets included in the Intermediate Blocks and the octets of user information of
the Last Block. The specific calculation is as follows.
Let k be the total number of user information and pad bits over which the packet
CRC is to be calculated. Consider the k message bits as the coefficients of a
polynomial M(x) of degree k–1, associating the MSB of the zero-th message
octet with x^k–1 and the LSB of the last message octet with x^0. Define the
generator polynomial, GM(x), and the inversion polynomial, IM(x).
GM(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 +
x^4 + x^2 + x + 1
IM(x) = x^31 + x^30 + x^29 + ... + x^2 + x +1
The packet CRC polynomial, FM(x), is then computed from the following formula.
FM(x) = ( x^32 M(x) mod GM(x) ) + IM(x) modulo 2, i.e., in GF(2)
The coefficients of FM(x) are placed in the CRC field with the MSB of the zero-th
octet of the CRC corresponding to x^31 and the LSB of the third octet of the CRC
corresponding to x^0.
In the above quote, I have put ^ to show powers as the formatting didn't stay the same when quoted. I'm not sure what goes to what but does this help?
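For what it's worth, a direct, bit-at-a-time rendering of the quoted formula would look like the sketch below: zero initial register, message bits fed MSB-first, and all 32 bits inverted at the end (the IM(x) term). This is my literal reading of the spec, not verified against captured data, and the function name is illustrative.

    static uint PacketCrc32(byte[] message)
    {
        const uint poly = 0x04C11DB7;   // GM(x), with the x^32 term implied
        uint reg = 0;                   // the formula has no initial preset
        foreach (byte b in message)
        {
            for (int bit = 7; bit >= 0; bit--)
            {
                bool topSet = (reg & 0x80000000u) != 0;
                bool inBit = ((b >> bit) & 1) != 0;
                reg <<= 1;
                if (topSet ^ inBit)     // XOR-ing input at the top is equivalent
                    reg ^= poly;        // to appending 32 zero bits (the x^32 factor)
            }
        }
        return reg ^ 0xFFFFFFFFu;       // adding IM(x) in GF(2) inverts every bit
    }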
I have a class I converted from some C++ code I found on the internet; it uses a long to calculate a CRC32. It adheres to the standard and uses the same polynomial as PKZIP, WinZip and Ethernet. To test it, use WinZip to compress a file, then calculate the CRC of the same file with this class; it should return the same CRC. It does for me.
public class CRC32
{
private int[] iTable;
public CRC32() {
this.iTable = new int[256];
Init();
}
/**
 * Initialize the iTable, applying the polynomial used by PKZIP, WinZip and Ethernet.
 */
private void Init()
{
// 0x04C11DB7 is the official polynomial used by PKZip, WinZip and Ethernet.
int iPolynomial = 0x04C11DB7;
// 256 values representing ASCII character codes.
for (int iAscii = 0; iAscii <= 0xFF; iAscii++)
{
this.iTable[iAscii] = this.Reflect(iAscii, (byte) 8) << 24;
for (int i = 0; i <= 7; i++)
{
if ((this.iTable[iAscii] & 0x80000000L) == 0) this.iTable[iAscii] = (this.iTable[iAscii] << 1) ^ 0;
else this.iTable[iAscii] = (this.iTable[iAscii] << 1) ^ iPolynomial;
}
this.iTable[iAscii] = this.Reflect(this.iTable[iAscii], (byte) 32);
}
}
/**
 * Reflection is a requirement for the official CRC-32 standard. Note that you can create a CRC without it,
 * but it won't conform to the standard.
 *
 * @param iReflect
 *            value to apply the reflection to
 * @param iValue
 *            number of bits to reflect
 * @return the calculated value
 */
private int Reflect(int iReflect, int iValue)
{
int iReturned = 0;
// Swap bit 0 for bit 7, bit 1 For bit 6, etc....
for (int i = 1; i < (iValue + 1); i++)
{
if ((iReflect & 1) != 0)
{
iReturned |= (1 << (iValue - i));
}
iReflect >>= 1;
}
return iReturned;
}
/**
 * CalculateCRC computes the CRC32 by looping through each byte in sData.
 *
 * @param lCRC
 *            the variable that holds the CRC. It must have been initialized.
 *            <p>
 *            See FullCRC for an example.
 *            </p>
 * @param sData
 *            array of bytes to calculate the CRC over
 * @param iDataLength
 *            the length of the data
 * @return the newly calculated CRC
 */
public long CalculateCRC(long lCRC, byte[] sData, int iDataLength)
{
for (int i = 0; i < iDataLength; i++)
{
lCRC = (lCRC >> 8) ^ (long) (this.iTable[(int) (lCRC & 0xFF) ^ (int) (sData[i] & 0xff)] & 0xffffffffL);
}
return lCRC;
}
/**
 * Calculates the CRC32 for the given data.
 *
 * @param sData
 *            the data to calculate the CRC over
 * @param iDataLength
 *            the length of the data
 * @return the calculated CRC32
 */
public long FullCRC(byte[] sData, int iDataLength)
{
long lCRC = 0xffffffffL;
lCRC = this.CalculateCRC(lCRC, sData, iDataLength);
return (lCRC ^ 0xffffffffL);
}
/**
 * Calculates the CRC32 of a file.
 *
 * @param sFileName
 *            The complete file path
 * @param context
 *            The context used to open the file.
 * @return the calculated CRC32, or -1 if an error occurs (file not found).
 */
long FileCRC(String sFileName, Context context)
{
long iOutCRC = 0xffffffffL; // Initialize the CRC.
int iBytesRead = 0;
int buffSize = 32 * 1024;
FileInputStream isFile = null;
try
{
byte[] data = new byte[buffSize]; // 32 KB buffer
isFile = context.openFileInput(sFileName);
try
{
while ((iBytesRead = isFile.read(data, 0, buffSize)) > 0)
{
iOutCRC = this.CalculateCRC(iOutCRC, data, iBytesRead);
}
return (iOutCRC ^ 0xffffffffL); // Finalize the CRC.
}
catch (Exception e)
{
// Error reading file
}
finally
{
isFile.close();
}
}
catch (Exception e)
{
// file not found
}
return -1L;
}
}
Read Ross Williams' tutorial on CRCs to get a better understanding of CRCs, what defines a particular CRC, and their implementations.
The reveng website has an excellent catalog of known CRCs, and for each the CRC of a test string (nine bytes: "123456789" in ASCII/UTF-8). Note that there are 22 different 16-bit CRCs defined there.
The reveng software on that same site can be used to reverse engineer the polynomial, initialization, post-processing, and bit reversal given several examples as you have for the 16-bit CRC. (Hence the name "reveng".) I ran your data through and got:
./reveng -w 16 -s 16000040FFFFFFFF00015FCB 3C00003144010405E57022C7 BA00001144010101B970F0ED 3900010101390401B3049FF1 09900C800000000000008CF3 8590000000000000000035F7 00900259025902590259EBC9 0200002B00080191014BF5A2 BB0000BEE0014401B970E51E 3D000322D0320A2510A263A0 2C0001440000D60000D65E54
width=16 poly=0x1021 init=0xc921 refin=false refout=false xorout=0x0000 check=0x2fcf name=(none)
As indicated by the "(none)", that 16-bit CRC is not any of the 22 listed on reveng, though it is similar to several of them, differing only in the initialization.
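If you want to reproduce that result in code, a minimal MSB-first implementation with those exact parameters (poly 0x1021, init 0xC921, no reflection, no final XOR) would look like this sketch; the function name is mine.

    static ushort Crc16(byte[] data)
    {
        ushort crc = 0xC921;                        // initial register value
        foreach (byte b in data)
        {
            crc ^= (ushort)(b << 8);                // feed the next byte at the top
            for (int i = 0; i < 8; i++)
                crc = (crc & 0x8000) != 0
                    ? (ushort)((crc << 1) ^ 0x1021) // top bit set: shift and apply poly
                    : (ushort)(crc << 1);           // top bit clear: just shift
        }
        return crc;                                 // xorout is zero, nothing to invert
    }

Run over the first ten bytes of a message (hex-decoded), it should reproduce the last two octets; and because xorout is 0, running it over a full twelve-byte message, CRC included, should yield 0x0000 - the checking trick described in another answer below.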
The additional information you provided is for a 32-bit CRC, either CRC-32 or CRC-32/BZIP in the reveng catalog, depending on whether the bits are reversed or not.
There are quite a few parameters to CRC calculations: Polynomial, initial value, final XOR... see Wikipedia for details. Your CRC does not seem to fit the ones on the site you used, but you can try to find the right parameters from your documentation and use a different calculator, e.g. this one (though I'm afraid it doesn't support HEX input).
One thing to keep in mind is that CRC-16 is usually calculated over the data that is supposed to be checksummed plus two zero-bytes, e.g. you are probably looking for a CRC16 function where CRC16(BD001325E032091B94C40000) == 12AC. With checksums calculated in this way, the CRC of the data with checksum appended will work out to 0, which makes checking easier, e.g. CRC16(BD001325E032091B94C412AC) == 0000
I'm attempting to convert files from EBCDIC to ASCII format and have run into an interesting issue. The files contain fixed-length records, with some fields being signed binary integers (described as B4 in the record layout) and others long-precision numeric values (described as L8 in the record layout). I've been able to convert character data with no problem, but I'm not sure how to go about converting these numeric values. From a reference manual for the original system (an IBM 5110), the fields are described below.
B indicates the length (2, 4, or 8 bytes) of numeric data items in
fixed-point signed binary integer format that are to be converted to
BASIC internal data format. For record I/O file input, the next 2,
4, or 8 bytes in the record contain a signed binary value to be
converted by the system into internal data format and assigned to the
variable(s) specified in the READ FILE or REREAD FILE statement using
a FORM statement.
and
L indicates long-precision (8 characters) for numeric values. For
input, this entry indicates that an eight-position, long-precision
value in the record is to be assigned without conversion to a
corresponding numeric variable specified in the READ FILE or REREAD
FILE statement.
EDIT: Here's the code I'm using for the conversion
private void ConvertFile(EbcdicFile file)
{
if (file == null) return;
var filePath = Path.Combine(file.Path, file.FileName);
if (!File.Exists(filePath))
{
this.Logger.Info(string.Format("Cannot convert file {0}. It does not exist.", filePath));
return;
}
var ebcdic = Encoding.GetEncoding(37);
string convertedFilepath = Path.Combine(file.Path, file.ConvertedFileName);
byte[] fileData = File.ReadAllBytes(filePath);
if (!file.HasNumericFields)
File.WriteAllBytes(convertedFilepath, Encoding.Convert(ebcdic, Encoding.ASCII, fileData));
else
{
var convertedFileData = new List<byte>();
for (int position = 0; position < fileData.Length; position += file.RecordLength)
{
var segment = new ArraySegment<byte>(fileData, position, file.RecordLength);
file.Fields.ForEach(field =>
{
var fieldSegment = segment.Array.Skip(segment.Offset + field.Start - 1).Take(field.Length);
if (field.Type.Equals("string", StringComparison.OrdinalIgnoreCase))
{
convertedFileData.AddRange(
Encoding.Convert(ebcdic, Encoding.ASCII, fieldSegment.ToArray())
);
}
else if (field.Type.Equals("B4", StringComparison.OrdinalIgnoreCase))
{
// Not sure how to convert this field
}
else if (field.Type.Equals("L8", StringComparison.OrdinalIgnoreCase))
{
// Not sure how to convert this field
}
});
}
File.WriteAllBytes(convertedFilepath, convertedFileData.ToArray());
}
}
You must first know the fixed record size. Use FileStream.Read() to read one record worth of bytes. Then Encoding.GetString() to convert it to a string.
Then fish the fields out of the record using String.Substring(). A B4 is simply a Substring call with a length of 4, an L8 one with a length of 8. Further convert such a field to a number with Decimal.Parse(). You may have to divide the result; it wasn't clear what fixed-point multiplier is used. Good odds it's 100.
Okay, so I've figured out how to convert both fields. B4 fields are very straightforward. They are essentially a 4-byte array which can be converted to an integer.
// 'by' holds the four bytes of the B4 field.
// The IBM 5110 was a big-endian machine, so reverse the array on little-endian hosts.
if (BitConverter.IsLittleEndian)
    Array.Reverse(by);
int value = BitConverter.ToInt32(by, 0);
The L8 fields are 8-byte arrays representing an IBM double-precision float. There are many ways this can be converted to an IEEE 754 float. A few examples can be found at:
How To Read IBM 370 Data from a Binary File
Transform between IEEE, IBM or VAX floating point number formats and bytes expressions
Here's the version I used based on guidance from the articles.
private double IbmFloatToDouble(byte[] value)
{
if (ReferenceEquals(null, value))
throw new ArgumentNullException("value");
if (BitConverter.ToInt64(value, 0) == 0)
return 0;
int exponentBias = 64;
int ibmBase = 16;
double sign = 0.0D;
int signValue = (value[0] & 0x80) >> 7;
int exponentValue = (value[0] & 0x7f);
double fraction1 = (value[1] << 16) + (value[2] << 8) + value[3];
double fraction2 = (value[4] << 24) + (value[5] << 16) + (value[6] << 8) + value[7];
double exponent24 = 16777216.0; // 2^24
double exponent56 = 72057594037927936.0; // 2^56
double mantissa1 = fraction1 / exponent24;
double mantissa2 = fraction2 / exponent56;
double mantissa = mantissa1 + mantissa2;
double exponent = Math.Pow(ibmBase, exponentValue - exponentBias);
if (signValue == 0)
sign = 1.0;
else
sign = -1.0;
return (sign * mantissa * exponent);
}
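To tie this back to the question's ConvertFile method, the two placeholder branches could be filled in roughly as follows, assuming the output should hold the numbers as ASCII text (the formatting choices are mine, and note this changes the output record length):

    else if (field.Type.Equals("B4", StringComparison.OrdinalIgnoreCase))
    {
        var by = fieldSegment.ToArray();
        if (BitConverter.IsLittleEndian)
            Array.Reverse(by);                 // IBM data is big-endian
        int number = BitConverter.ToInt32(by, 0);
        convertedFileData.AddRange(Encoding.ASCII.GetBytes(number.ToString()));
    }
    else if (field.Type.Equals("L8", StringComparison.OrdinalIgnoreCase))
    {
        double number = IbmFloatToDouble(fieldSegment.ToArray());
        convertedFileData.AddRange(Encoding.ASCII.GetBytes(number.ToString("R")));
    }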
I have a task to complete in C#. I have a Subnet Mask: 255.255.128.0.
I need to find the number of bits in the Subnet Mask, which would be, in this case, 17.
However, I need to be able to do this in C# WITHOUT the use of the System.Net library (the system I am programming in does not have access to this library).
It seems like the process should be something like:
1) Split the Subnet Mask into Octets.
2) Convert the Octets to be binary.
3) Count the number of Ones in each Octet.
4) Output the total number of found Ones.
However, my C# is pretty poor. Does anyone have the C# knowledge to help?
Bit counting algorithm taken from:
http://www.necessaryandsufficient.net/2009/04/optimising-bit-counting-using-iterative-data-driven-development/
string mask = "255.255.128.0";
int totalBits = 0;
foreach (string octet in mask.Split('.'))
{
byte octetByte = byte.Parse(octet);
while (octetByte != 0)
{
totalBits += octetByte & 1; // logical AND on the LSB
octetByte >>= 1; // do a bitwise shift to the right to create a new LSB
}
}
Console.WriteLine(totalBits);
The most simple algorithm from the article was used. If performance is critical, you might want to read the article and use a more optimized solution from it.
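For instance, one classic optimisation from that family is Kernighan's trick, which loops once per set bit instead of once per bit position (a sketch, independent of the article's exact code):

    // v & (v - 1) clears the lowest set bit each iteration.
    static int CountOnes(byte value)
    {
        int count = 0;
        for (int v = value; v != 0; v &= v - 1)
            count++;
        return count;
    }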
string ip = "255.255.128.0";
string a = "";
ip.Split('.').ToList().ForEach(x => a += Convert.ToString(Convert.ToInt32(x), 2));
int ones_found = a.Replace("0", "").Length;
A complete sample:
public int CountBit(string mask)
{
    int ones = 0;
    Array.ForEach(mask.Split('.'), (s) => Array.ForEach(Convert.ToString(int.Parse(s), 2).Where(c => c == '1').ToArray(), (k) => ones++));
    return ones;
}
You can convert a number to binary like this:
string ip = "255.255.128.0";
string[] tokens = ip.Split('.');
string result = "";
foreach (string token in tokens)
{
int tokenNum = int.Parse(token);
string octet = Convert.ToString(tokenNum, 2);
while (octet.Length < 8)
    octet = '0' + octet; // pad on the left so bit positions line up
result += octet;
}
int mask = result.LastIndexOf('1') + 1;
The solution is to use a binary operation like
foreach(string octet in ipAddress.Split('.'))
{
int oct = int.Parse(octet);
while(oct !=0)
{
total += oct & 1; // {1}
oct >>=1; //{2}
}
}
The trick is that on line {1} the binary AND is in essence a multiplication: 1x0=0, 1x1=1. So if we take some hypothetical number
0000101001 and AND it with 1 (which in binary is nothing other than 0000000001), we get
0000101001
0000000001
The rightmost digit is 1 in both numbers, so the binary AND returns 1; if the low digit of either number were 0, the result would be 0.
So here, on the line total += oct & 1, we add either 1 or 0 to total, based on that lowest digit.
On line {2}, we just shift the bits one place to the right, effectively dividing the number by 2, until it becomes 0.
Easy.
EDIT
This is valid for integer and byte types, but do not use this technique on floating-point numbers. By the way, it's a pretty valuable solution for this question.
I've been wrestling with Project Euler Problem #16 in C# 2.0. The crux of the question is that you have to calculate and then iterate through each digit in a number that is 302 digits long (or thereabouts). You then add up these digits to produce the answer.
This presents a problem: C# 2.0 doesn't have a built-in datatype that can handle this sort of calculation precision. I could use a 3rd party library, but that would defeat the purpose of attempting to solve it programmatically without external libraries. I can solve it in Perl; but I'm trying to solve it in C# 2.0 (I'll attempt to use C# 3.0 in my next run-through of the Project Euler questions).
Question
What suggestions (not answers!) do you have for solving Project Euler #16 in C# 2.0? What methods would work?
NB: If you decide to post an answer, please prefix your attempt with a blockquote that has ###Spoiler written before it.
A number is a series of digits. A 32 bit unsigned int is 32 binary digits. The string "12345" is a series of 5 digits. Digits can be stored in many ways: as bits, characters, array elements and so on. The largest "native" datatype in C# with complete precision is probably the decimal type (128 bits, 28-29 digits). Just choose your own method of storing digits that allows you to store much bigger numbers.
As for the rest, this will give you a clue:
2^1 = 2
2^2 = 2^1 + 2^1
2^3 = 2^2 + 2^2
Example:
The sum of digits of 2^100000 is 135178
Ran in 4875 ms
The sum of digits of 2^10000 is 13561
Ran in 51 ms
The sum of digits of 2^1000 is 1366
Ran in 2 ms
SPOILER ALERT: Algorithm and solution in C# follows.
Basically, as alluded to a number is nothing more than an array of digits. This can be represented easily in two ways:
As a string;
As an array of characters or digits.
As others have mentioned, storing the digits in reverse order is actually advisable. It makes the calculations much easier. I tried both of the above methods. I found strings and the character arithmetic irritating (it's easier in C/C++; the syntax is just plain annoying in C#).
The first thing to note is that you can do this with one array. You don't need to allocate more storage at each iteration. As mentioned, you can find a power of 2 by doubling the previous power of 2, so you can find 2^1000 by doubling 1 one thousand times. The doubling can be done in place with the general algorithm:
carry = 0
foreach digit in array
sum = digit + digit + carry
if sum >= 10 then
carry = 1
sum -= 10
else
carry = 0
end if
digit = sum
end foreach
This algorithm is basically the same for using a string or an array. At the end you just add up the digits. A naive implementation might add the results into a new array or string with each iteration. Bad idea. Really slows it down. As mentioned, it can be done in place.
But how large should the array be? Well, that's easy too. Mathematically you can convert 2^a to 10^(a·log10 2), and the number of digits you need is the next higher integer from that power of 10. For simplicity, you can just use:
digits required = ceil(power of 2 / 3)
which is a close approximation and sufficient.
Where you can really optimise this is by using larger digits. A 32 bit signed int can store a number between +/- 2 billion (approximately). Nine digits equals a billion, so you can use a 32 bit int (signed or unsigned) as, in effect, a base-one-billion "digit". You can work out how many ints you need, create that array, and that's all the storage you need to run the entire algorithm (about 130 bytes), with everything done in place.
Solution follows (in fairly rough C#):
static void problem16a()
{
const int limit = 1000;
int ints = limit / 29;
int[] number = new int[ints + 1];
number[0] = 2;
for (int i = 2; i <= limit; i++)
{
doubleNumber(number);
}
String text = NumberToString(number);
Console.WriteLine(text);
Console.WriteLine("The sum of digits of 2^" + limit + " is " + sumDigits(text));
}
static void doubleNumber(int[] n)
{
int carry = 0;
for (int i = 0; i < n.Length; i++)
{
n[i] <<= 1;
n[i] += carry;
if (n[i] >= 1000000000)
{
carry = 1;
n[i] -= 1000000000;
}
else
{
carry = 0;
}
}
}
static String NumberToString(int[] n)
{
int i = n.Length;
while (i > 0 && n[--i] == 0)
;
String ret = "" + n[i--];
while (i >= 0)
{
ret += String.Format("{0:000000000}", n[i--]);
}
return ret;
}
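The sumDigits helper referenced above isn't shown in the answer; a minimal version consistent with its usage (summing the decimal digit characters of the string) might be:

    static int sumDigits(String text)
    {
        int sum = 0;
        foreach (char c in text)
            sum += c - '0';    // each character is a decimal digit
        return sum;
    }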
I solved this one using C# also, much to my dismay when I discovered that Python can do this in one simple operation.
Your goal is to create an adding machine using arrays of int values.
Spoiler follows
I ended up using an array of int values to simulate an adding machine, but I represented the number backwards - which you can do because the problem only asks for the sum of the digits; this means order is irrelevant.
What you're essentially doing is doubling the value 1000 times, so you can double the value 1 stored in the 1st element of the array, and then continue looping until your value is over 10. This is where you will have to keep track of a carry value. The first power of 2 that is over 10 is 16, so the elements in the array after the 5th iteration are 6 and 1.
Now when you loop through the array starting at the 1st value (6), it becomes 12 (so you keep the last digit, and set a carry bit on the next index of the array) - and when that value is doubled you get 2 ... plus the 1 for the carry bit, which equals 3. Now you have 2 and 3 in your array, which represents 32.
Continue this process 1000 times and you'll have an array with roughly 300 elements that you can easily add up.
I have solved this one before, and now I re-solved it using C# 3.0. :)
I just wrote a Multiply extension method that takes an IEnumerable<int> and a multiplier and returns an IEnumerable<int>. (Each int represents a digit, and the first one is the least significant digit.) Then I just created a list with the item { 1 } and multiplied it by 2 a thousand times, as sketched below. Adding the items in the list is simple with the Sum extension method.
19 lines of code, which runs in 13 ms. on my laptop. :)
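The answer doesn't include the code; a guess at the shape of such a Multiply method (illustrative only, not the author's actual 19 lines) could be:

    using System.Collections.Generic;

    static class DigitListExtensions
    {
        // Digits are least significant first; multiplies the digit sequence
        // by a small integer, propagating carries as it goes.
        public static IEnumerable<int> Multiply(this IEnumerable<int> digits, int multiplier)
        {
            int carry = 0;
            foreach (int d in digits)
            {
                int value = d * multiplier + carry;
                yield return value % 10;   // emit the low digit
                carry = value / 10;        // carry the rest into the next position
            }
            for (; carry > 0; carry /= 10)
                yield return carry % 10;   // flush any remaining carry digits
        }
    }

Each round should be materialized (e.g. digits = digits.Multiply(2).ToList()) so the deferred enumerations don't end up nested a thousand deep.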
Pretend you are very young, with square paper. To me, that is like a list of numbers. Then to double it you double each number, then handle any "carries", by subtracting the 10s and adding 1 to the next index. So if the answer is 1366... something like (completely unoptimized, rot13):
hfvat Flfgrz;
hfvat Flfgrz.Pbyyrpgvbaf.Trarevp;
pynff Cebtenz {
fgngvp ibvq Pneel(Yvfg<vag> yvfg, vag vaqrk) {
juvyr (yvfg[vaqrk] > 9) {
yvfg[vaqrk] -= 10;
vs (vaqrk == yvfg.Pbhag - 1) yvfg.Nqq(1);
ryfr yvfg[vaqrk + 1]++;
}
}
fgngvp ibvq Znva() {
ine qvtvgf = arj Yvfg<vag> { 1 }; // 2^0
sbe (vag cbjre = 1; cbjre <= 1000; cbjre++) {
sbe (vag qvtvg = 0; qvtvg < qvtvgf.Pbhag; qvtvg++) {
qvtvgf[qvtvg] *= 2;
}
sbe (vag qvtvg = 0; qvtvg < qvtvgf.Pbhag; qvtvg++) {
Pneel(qvtvgf, qvtvg);
}
}
qvtvgf.Erirefr();
sbernpu (vag v va qvtvgf) {
Pbafbyr.Jevgr(v);
}
Pbafbyr.JevgrYvar();
vag fhz = 0;
sbernpu (vag v va qvtvgf) fhz += v;
Pbafbyr.Jevgr("fhz: ");
Pbafbyr.JevgrYvar(fhz);
}
}
If you wish to do the primary calculation in C#, you will need some sort of big integer implementation (much like GMP for C/C++). Programming is about using the right tool for the job. If you cannot find a good big integer library for C#, it's not against the rules to calculate the number in a language like Python, which already handles large numbers. You could then put this number into your C# program via your method of choice and iterate over each character in the number (you will have to store it as a string). For each character, convert it to an integer and add it to your total until you reach the end of the number. If you would like the big integer, I calculated it with Python below. The answer is further down.
Partial Spoiler
10715086071862673209484250490600018105614048117055336074437503883703510511249361
22493198378815695858127594672917553146825187145285692314043598457757469857480393
45677748242309854210746050623711418779541821530464749835819412673987675591655439
46077062914571196477686542167660429831652624386837205668069376
Spoiler Below!
>>> val = str(2**1000)
>>> total = 0
>>> for i in range(0,len(val)): total += int(val[i])
>>> print total
1366
If you've got ruby, you can easily calculate "2**1000" and get it as a string. Should be an easy cut/paste into a string in C#.
Spoiler
In Ruby: (2**1000).to_s.split(//).inject(0){|x,y| x+y.to_i}
spoiler
If you want to see a solution, check out my other answer. This is in Java, but it's very easy to port to C#.
Here's a clue:
Represent each number with a list. That way you can do basic sums like:
[1,2,3,4,5,6]
+ [4,5]
_____________
[1,2,3,5,0,1]
One alternative to representing the digits as a sequence of integers is to represent the number in base 2^32 as a list of 32 bit integers, which is what many big integer libraries do. You then have to convert the number to base 10 for output. This doesn't gain you very much for this particular problem: you can write 2^1000 straight away and then have to divide by 10 many times, instead of multiplying 2 by itself 1000 times. (Or, since 1000 is 0b1111101000, you can calculate the product of 2^8, 2^32, 2^64, 2^128, 2^256 and 2^512 by repeated squaring - 2^8 = ((2^2)^2)^2 - which requires more space and a multiplication method, but far fewer operations.) It is, however, closer to normal big integer use, so you may find it more useful in later problems. For example, if you try to calculate the last ten digits of 28433×2^(7830457)+1 using the digit-per-int method and repeated addition, it may take some time (though in that case you could use modular arithmetic rather than adding strings of millions of digits).
A working solution, which I have also posted here: http://www.mycoding.net/2012/01/solution-to-project-euler-problem-16/
The code:
import java.math.BigInteger;
public class Euler16 {
public static void main(String[] args) {
int power = 1;
BigInteger expo = new BigInteger("2");
BigInteger num = new BigInteger("2");
while(power < 1000){
expo = expo.multiply(num);
power++;
}
System.out.println(expo); //Printing the value of 2^1000
int sum = 0;
char[] expoarr = expo.toString().toCharArray();
int max_count = expoarr.length;
int count = 0;
while(count<max_count){ //While loop to calculate the sum of digits
sum = sum + (expoarr[count]-48);
count++;
}
System.out.println(sum);
}
}
Euler problem #16 has been discussed many times here, but I could not find an answer that gives a good overview of possible solution approaches, the lay of the land as it were. Here's my attempt at rectifying that.
This overview is intended for people who have already found a solution and want to get a more complete picture. It is basically language-agnostic even though the sample code is C#. There are some usages of features that are not available in C# 2.0 but they are not essential - their purpose is only to get boring stuff out of the way with a minimum of fuss.
Apart from using a ready-made BigInteger library (which doesn't count), straightforward solutions for Euler #16 fall into two fundamental categories: performing calculations natively - i.e. in a base that is a power of two - and converting to decimal in order to get at the digits, or performing the computations directly in a decimal base so that the digits are available without any conversion.
For the latter there are two reasonably simple options:
repeated doubling
powering by repeated squaring
Native Computation + Radix Conversion
This approach is the simplest and its performance exceeds that of naive solutions using .Net's builtin BigInteger type.
The actual computation is trivially achieved: just perform the moral equivalent of 1 << 1000, by storing 1000 binary zeroes and appending a single lone binary 1.
The conversion is also quite simple and can be done by coding the pencil-and-paper division method, with a suitably large choice of 'digit' for efficiency. Variables for intermediate results need to be able to hold two 'digits'; dividing the number of decimal digits that fit in a long by 2 gives 9 decimal digits for the maximum meta-digit (or 'limb', as it is usually called in bignum lore).
class E16_RadixConversion
{
const int BITS_PER_WORD = sizeof(uint) * 8;
const uint RADIX = 1000000000; // == 10^9
public static int digit_sum_for_power_of_2 (int exponent)
{
var dec = new List<int>();
var bin = new uint[(exponent + BITS_PER_WORD) / BITS_PER_WORD];
int top = bin.Length - 1;
bin[top] = 1u << (exponent % BITS_PER_WORD);
while (top >= 0)
{
ulong rest = 0;
for (int i = top; i >= 0; --i)
{
ulong temp = (rest << BITS_PER_WORD) | bin[i];
ulong quot = temp / RADIX; // x64 uses MUL (sometimes), x86 calls a helper function
rest = temp - quot * RADIX;
bin[i] = (uint)quot;
}
dec.Add((int)rest);
if (bin[top] == 0)
--top;
}
return E16_Common.digit_sum(dec);
}
}
I wrote (rest << BITS_PER_WORD) | bin[i] instead of using operator + because that is precisely what is needed here; no 64-bit addition with carry propagation needs to take place. This means that the two operands could be written directly to their separate registers in a register pair, or to fields in an equivalent struct like LARGE_INTEGER.
On 32-bit systems the 64-bit division cannot be inlined as a few CPU instructions, because the compiler cannot know that the algorithm guarantees quotient and remainder to fit into 32-bit registers. Hence the compiler calls a helper function that can handle all eventualities.
These systems may profit from using a smaller limb, i.e. RADIX = 10000 and uint instead of ulong for holding intermediate (double-limb) results. An alternative for languages like C/C++ would be to call a suitable compiler intrinsic that wraps the raw 32-bit by 32-bit to 64-bit multiply (assuming that division by the constant radix is to be implemented by multiplication with the inverse). Conversely, on 64-bit systems the limb size can be increased to 19 digits if the compiler offers a suitable 64-by-64-to-128 bit multiply primitive or allows inline assembler.
Decimal Doubling
Repeated doubling seems to be everyone's favourite, so let's do that next. Variables for intermediate results need to hold one 'digit' plus one carry bit, which gives 18 digits per limb for long. Going to ulong cannot improve things (there's 0.04 bit missing to 19 digits plus carry), and so we might as well stick with long.
On a binary computer, decimal limbs do not coincide with computer word boundaries. That makes it necessary to perform a modulo operation on the limbs during each step of the calculation. Here, this modulo op can be reduced to a subtraction of the modulus in the event of carry, which is faster than performing a division. The branching in the inner loop could be eliminated by bit twiddling (a sketch follows the class below), but that would be needlessly obscure for a demonstration of the basic algorithm.
class E16_DecimalDoubling
{
const int DIGITS_PER_LIMB = 18; // == floor(log10(2) * (63 - 1)), b/o carry
const long LIMB_MODULUS = 1000000000000000000L; // == 10^18
public static int digit_sum_for_power_of_2 (int power_of_2)
{
Trace.Assert(power_of_2 > 0);
int total_digits = (int)Math.Ceiling(Math.Log10(2) * power_of_2);
int total_limbs = (total_digits + DIGITS_PER_LIMB - 1) / DIGITS_PER_LIMB;
var a = new long[total_limbs];
int limbs = 1;
a[0] = 2;
for (int i = 1; i < power_of_2; ++i)
{
int carry = 0;
for (int j = 0; j < limbs; ++j)
{
long new_limb = (a[j] << 1) | carry;
carry = 0;
if (new_limb >= LIMB_MODULUS)
{
new_limb -= LIMB_MODULUS;
carry = 1;
}
a[j] = new_limb;
}
if (carry != 0)
{
a[limbs++] = carry;
}
}
return E16_Common.digit_sum(a);
}
}
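For the curious, the bit-twiddling variant mentioned above could replace the carry check in the inner loop. This is a sketch; it relies on arithmetic right shift of a signed 64-bit value and on new_limb staying well below 2^62:

    long t = new_limb - LIMB_MODULUS;
    long keep = t >> 63;              // all ones when new_limb < LIMB_MODULUS, else zero
    new_limb = (new_limb & keep) | (t & ~keep);
    carry = (int)(~keep & 1);         // 1 exactly when the modulus was subtracted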
This is just as simple as radix conversion, but except for very small exponents it does not perform anywhere near as well (despite its huge meta-digits of 18 decimal places). The reason is that the code must perform (exponent - 1) doublings, and the work done in each pass corresponds to about half the total number of digits (limbs).
Repeated Squaring
The idea behind powering by repeated squaring is to replace a large number of doublings with a small number of multiplications.
1000 = 2^3 + 2^5 + 2^6 + 2^7 + 2^8 + 2^9
x^1000 = x^(2^3 + 2^5 + 2^6 + 2^7 + 2^8 + 2^9)
x^1000 = x^2^3 * x^2^5 * x^2^6 * x^2^7 * x^2^8 * x^2^9
x^2^3 can be obtained by squaring x three times, x^2^5 by squaring five times, and so on. On a binary computer the decomposition of the exponent into powers of two is readily available because it is the bit pattern representing that number. However, even non-binary computers should be able to test whether a number is odd or even, or to divide a number by two.
The multiplication can be done by coding the pencil-and-paper method; here I'm using a helper function that computes one row of a product and adds it into the result at a suitably shifted position, so that the rows of partial products do not need to be stored for a separate addition step later. Intermediate values during computation can be up to two 'digits' in size, so that the limbs can be only half as wide as for repeated doubling (where only one extra bit had to fit in addition to a 'digit').
Note: the radix of the computations is not a power of 2, and so the squarings of 2 cannot be computed by simple shifting here. On the positive side, the code can be used for computing powers of bases other than 2.
class E16_DecimalSquaring
{
const int DIGITS_PER_LIMB = 9; // language limit 18, half needed for holding the carry
const int LIMB_MODULUS = 1000000000;
public static int digit_sum_for_power_of_2 (int e)
{
Trace.Assert(e > 0);
int total_digits = (int)Math.Ceiling(Math.Log10(2) * e);
int total_limbs = (total_digits + DIGITS_PER_LIMB - 1) / DIGITS_PER_LIMB;
var squared_power = new List<int>(total_limbs) { 2 };
var result = new List<int>(total_limbs);
result.Add((e & 1) == 0 ? 1 : 2);
while ((e >>= 1) != 0)
{
squared_power = multiply(squared_power, squared_power);
if ((e & 1) == 1)
result = multiply(result, squared_power);
}
return E16_Common.digit_sum(result);
}
static List<int> multiply (List<int> lhs, List<int> rhs)
{
var result = new List<int>(lhs.Count + rhs.Count);
resize_to_capacity(result);
for (int i = 0; i < rhs.Count; ++i)
addmul_1(result, i, lhs, rhs[i]);
trim_leading_zero_limbs(result);
return result;
}
static void addmul_1 (List<int> result, int offset, List<int> multiplicand, int multiplier)
{
// it is assumed that the caller has sized `result` appropriately before calling this primitive
Trace.Assert(result.Count >= offset + multiplicand.Count + 1);
long carry = 0;
foreach (long limb in multiplicand)
{
long temp = result[offset] + limb * multiplier + carry;
carry = temp / LIMB_MODULUS;
result[offset++] = (int)(temp - carry * LIMB_MODULUS);
}
while (carry != 0)
{
long final_temp = result[offset] + carry;
carry = final_temp / LIMB_MODULUS;
result[offset++] = (int)(final_temp - carry * LIMB_MODULUS);
}
}
static void resize_to_capacity (List<int> operand)
{
operand.AddRange(Enumerable.Repeat(0, operand.Capacity - operand.Count));
}
static void trim_leading_zero_limbs (List<int> operand)
{
int i = operand.Count;
while (i > 1 && operand[i - 1] == 0)
--i;
operand.RemoveRange(i, operand.Count - i);
}
}
The efficiency of this approach is roughly on par with radix conversion, but there are specific improvements that apply here. The efficiency of the squaring can be doubled by writing a special squaring routine that exploits the fact that a[i]*b[j] == a[j]*b[i] when a == b, which cuts the number of multiplications in half.
Also, there are methods for computing addition chains that involve fewer operations overall than using the exponent bits for determining the squaring/multiplication schedule.
Helper Code and Benchmarks
The helper code for summing decimal digits in the meta-digits (decimal limbs) produced by the sample code is trivial, but I'm posting it here anyway for your convenience:
internal class E16_Common
{
internal static int digit_sum (int limb)
{
int sum = 0;
for ( ; limb > 0; limb /= 10)
sum += limb % 10;
return sum;
}
internal static int digit_sum (long limb)
{
const int M1E9 = 1000000000;
return digit_sum((int)(limb / M1E9)) + digit_sum((int)(limb % M1E9));
}
internal static int digit_sum (IEnumerable<int> limbs)
{
return limbs.Aggregate(0, (sum, limb) => sum + digit_sum(limb));
}
internal static int digit_sum (IEnumerable<long> limbs)
{
return limbs.Select((limb) => digit_sum(limb)).Sum();
}
}
This can be made more efficient in various ways but overall it is not critical.
All three solutions take O(n^2) time, where n is the exponent. In other words, they will take a hundred times as long when the exponent grows by a factor of ten. Radix conversion and repeated squaring can both be improved to roughly O(n log n) by employing divide-and-conquer strategies; I doubt whether the doubling scheme can be improved in a similar fashion, but then it was never competitive to begin with.
All three solutions presented here can be used to print the actual results, by stringifying the meta-digits with suitable padding and concatenating them. I've coded the functions as returning the digit sum instead of the arrays/lists with decimal limbs only in order to keep the sample code simple and to ensure that all functions have the same signature, for benchmarking.
In these benchmarks, the .Net BigInteger type was wrapped like this:
static int digit_sum_via_BigInteger (int power_of_2)
{
return System.Numerics.BigInteger.Pow(2, power_of_2)
.ToString()
.ToCharArray()
.Select((c) => (int)c - '0')
.Sum();
}
Finally, the benchmarks for the C# code:
# testing decimal doubling ...
1000: 1366 in 0,052 ms
10000: 13561 in 3,485 ms
100000: 135178 in 339,530 ms
1000000: 1351546 in 33.505,348 ms
# testing decimal squaring ...
1000: 1366 in 0,023 ms
10000: 13561 in 0,299 ms
100000: 135178 in 24,610 ms
1000000: 1351546 in 2.612,480 ms
# testing radix conversion ...
1000: 1366 in 0,018 ms
10000: 13561 in 0,619 ms
100000: 135178 in 60,618 ms
1000000: 1351546 in 5.944,242 ms
# testing BigInteger + LINQ ...
1000: 1366 in 0,021 ms
10000: 13561 in 0,737 ms
100000: 135178 in 69,331 ms
1000000: 1351546 in 6.723,880 ms
As you can see, the radix conversion is almost as slow as the solution using the builtin BigInteger class. The reason is that the runtime is of the newer type that performs certain standard optimisations only for signed integer types but not for unsigned ones (here: implementing division by a constant as multiplication with the inverse).
I haven't found an easy means of inspecting the native code for existing .Net assemblies, so I decided on a different path of investigation: I coded a variant of E16_RadixConversion for comparison where ulong and uint were replaced by long and int respectively, and BITS_PER_WORD decreased by 1 accordingly. Here are the timings:
# testing radix conv Int63 ...
1000: 1366 in 0,004 ms
10000: 13561 in 0,202 ms
100000: 135178 in 18,414 ms
1000000: 1351546 in 1.834,305 ms
More than three times as fast as the version that uses unsigned types! Clear evidence of numbskullery in the compiler...
In order to showcase the effect of different limb sizes I templated the solutions in C++ on the unsigned integer types used as limbs. The timings are prefixed with the byte size of a limb and the number of decimal digits in a limb, separated by a colon. There is no timing for the often-seen case of manipulating digit characters in strings, but it is safe to say that such code will take at least twice as long as the code that uses double digits in byte-sized limbs.
# E16_DecimalDoubling
[1:02] e = 1000 -> 1366 0.308 ms
[2:04] e = 1000 -> 1366 0.152 ms
[4:09] e = 1000 -> 1366 0.070 ms
[8:18] e = 1000 -> 1366 0.071 ms
[1:02] e = 10000 -> 13561 30.533 ms
[2:04] e = 10000 -> 13561 13.791 ms
[4:09] e = 10000 -> 13561 6.436 ms
[8:18] e = 10000 -> 13561 2.996 ms
[1:02] e = 100000 -> 135178 2719.600 ms
[2:04] e = 100000 -> 135178 1340.050 ms
[4:09] e = 100000 -> 135178 588.878 ms
[8:18] e = 100000 -> 135178 290.721 ms
[8:18] e = 1000000 -> 1351546 28823.330 ms
For the exponent of 10^6 there is only the timing with 64-bit limbs, since I didn't have the patience to wait many minutes for full results. The picture is similar for radix conversion, except that there is no row for 64-bit limbs because my compiler does not have a native 128-bit integral type.
# E16_RadixConversion
[1:02] e = 1000 -> 1366 0.080 ms
[2:04] e = 1000 -> 1366 0.026 ms
[4:09] e = 1000 -> 1366 0.048 ms
[1:02] e = 10000 -> 13561 4.537 ms
[2:04] e = 10000 -> 13561 0.746 ms
[4:09] e = 10000 -> 13561 0.243 ms
[1:02] e = 100000 -> 135178 445.092 ms
[2:04] e = 100000 -> 135178 68.600 ms
[4:09] e = 100000 -> 135178 19.344 ms
[4:09] e = 1000000 -> 1351546 1925.564 ms
The interesting thing is that simply compiling the code as C++ doesn't make it any faster - i.e., the optimiser couldn't find any low-hanging fruit that the C# jitter missed, apart from not toeing the line with regard to penalising unsigned integers. That's the reason why I like prototyping in C# - performance in the same ballpark as (unoptimised) C++ and none of the hassle.
Here's the meat of the C++ version (sans reams of boring stuff like helper templates and so on) so that you can see that I didn't cheat to make C# look better:
template<typename W>
struct E16_RadixConversion
{
typedef W limb_t;
typedef typename detail::E16_traits<W>::long_t long_t;
static unsigned const BITS_PER_WORD = sizeof(limb_t) * CHAR_BIT;
static unsigned const RADIX_DIGITS = std::numeric_limits<limb_t>::digits10;
static limb_t const RADIX = detail::pow10_t<limb_t, RADIX_DIGITS>::RESULT;
static unsigned digit_sum_for_power_of_2 (unsigned e)
{
std::vector<limb_t> digits;
compute_digits_for_power_of_2(e, digits);
return digit_sum(digits);
}
static void compute_digits_for_power_of_2 (unsigned e, std::vector<limb_t> &result)
{
assert(e > 0);
unsigned total_digits = unsigned(std::ceil(std::log10(2) * e));
unsigned total_limbs = (total_digits + RADIX_DIGITS - 1) / RADIX_DIGITS;
result.resize(0);
result.reserve(total_limbs);
std::vector<limb_t> bin((e + BITS_PER_WORD) / BITS_PER_WORD);
bin.back() = limb_t(limb_t(1) << (e % BITS_PER_WORD));
while (!bin.empty())
{
long_t rest = 0;
for (std::size_t i = bin.size(); i-- > 0; )
{
long_t temp = (rest << BITS_PER_WORD) | bin[i];
long_t quot = temp / RADIX;
rest = temp - quot * RADIX;
bin[i] = limb_t(quot);
}
result.push_back(limb_t(rest));
if (bin.back() == 0)
bin.pop_back();
}
}
};
Conclusion
These benchmarks also show that this Euler task - like many others - seems designed to be solved on a ZX81 or an Apple ][, not on our modern toys that are a million times as powerful. There's no challenge involved here unless the limits are increased drastically (an exponent of 10^5 or 10^6 would be much more adequate).
A good overview of the practical state of the art can be had from GMP's overview of algorithms. Another excellent overview of the algorithms is chapter 1 of "Modern Computer Arithmetic" by Richard Brent and Paul Zimmermann. It contains exactly what one needs to know for coding challenges and competitions, but unfortunately the depth is not equal to that of Donald Knuth's treatment in "The Art of Computer Programming".
The radix conversion solution adds a useful technique to one's code challenge toolchest, since the given code can be trivially extended for converting any old big integer instead of only the bit pattern 1 << exponent. The repeated squaring solution can be similarly useful, since changing the sample code to power something other than 2 is again trivial.
The approach of performing computations directly in powers of 10 can be useful for challenges where decimal results are required, because performance is in the same ballpark as native computation but there is no need for a separate conversion step (which can require similar amounts of time as the actual computation).