Why use the decimal(int[]) constructor? - C#

I am maintaining a C# desktop application on Windows 7, using Visual Studio 2013. Somewhere in the code there is the following line, which creates a 0.01 decimal value using the Decimal(Int32[]) constructor:
decimal d = new decimal(new int[] { 1, 0, 0, 131072 });
My first question is: is it different from the following?
decimal d = 0.01M;
If it is not different, why did the developer go to the trouble of coding it like that?
I need to change this line in order to create dynamic values. Something like:
decimal d = (decimal) (1 / Math.Pow(10, digitNumber));
Am I going to cause some unwanted behavior this way?

It seems useful to me when the source of the decimal consists of bits.
The decimal type in .NET is defined by a set of bit fields (not just one stream of bits like an int), so constructing a decimal from bits can be useful when you communicate with other systems that return a decimal as a blob of bytes (over a socket, from a piece of memory, etc.).
It is then easy to convert that set of bits to a decimal, with no need for fancy conversion code. You can also construct a decimal from the exact inputs defined in the standard, which makes it convenient for testing the .NET Framework itself.
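For example, here is a minimal sketch (my own illustration, not code from the question) of round-tripping a decimal through a 16-byte blob, the way such an interop scenario might look:
using System;

class BitsRoundTrip
{
    static void Main()
    {
        // Simulate a 16-byte blob as it might arrive from another system:
        // four little-endian 32-bit ints in lo/mid/hi/flags order (an assumption for this sketch).
        byte[] blob = new byte[16];
        int[] source = decimal.GetBits(0.01m);      // { 1, 0, 0, 131072 }
        Buffer.BlockCopy(source, 0, blob, 0, 16);

        // Reassemble the four ints and construct the decimal directly from them.
        int[] bits = new int[4];
        for (int i = 0; i < 4; i++)
            bits[i] = BitConverter.ToInt32(blob, i * 4);

        Console.WriteLine(new decimal(bits));       // 0.01
    }
}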

The decimal(int[] bits) constructor allows you to give a bitwise definition of the decimal you're creating. bits must be a four-element int array where:
bits[0], bits[1], and bits[2] make up the 96-bit integer number.
bits[3] contains the scale factor and the sign.
It just allows you to get really precise with the definition of the decimal; judging from your example, I don't think you need that level of precision.
See here for more detail on using that constructor, or here for other constructors that may be more appropriate for you.
To answer your question more specifically: if digitNumber is the scale (the power-of-ten exponent), then decimal d = new decimal(new int[] { 1, 0, 0, digitNumber << 16 }); does what you want, since the scale goes in bits 16-23 of the last int in the array.
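If you want to sanity-check that layout, decimal.GetBits returns the same four-int representation, so a quick check (a snippet of my own, not part of the original answer) looks like:
int digitNumber = 2;                                       // scale of 2 -> 0.01
decimal d = new decimal(new int[] { 1, 0, 0, digitNumber << 16 });
Console.WriteLine(d);                                      // 0.01
Console.WriteLine(string.Join(", ", decimal.GetBits(d)));  // 1, 0, 0, 131072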

The definition in the xml is
//
// Summary:
// Initializes a new instance of System.Decimal to a decimal value represented
// in binary and contained in a specified array.
//
// Parameters:
// bits:
// An array of 32-bit signed integers containing a representation of a decimal
// value.
//
// Exceptions:
// System.ArgumentNullException:
// bits is null.
//
// System.ArgumentException:
// The length of the bits is not 4.-or- The representation of the decimal value
// in bits is not valid.
So for some unknown reason the original developer wanted to initialize his decimal this way. Maybe he just wanted to confuse someone in the future.
It won't affect your code if you change this to
decimal d = 0.01m;
because
(new decimal(new int[] { 1, 0, 0, 131072})) == 0.01m
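If you would rather keep a constructor call but make it readable, the Decimal(int, int, int, bool, byte) overload expresses the same value directly in terms of sign and scale (this is my suggestion, not something the answer above depends on):
// lo, mid, hi, isNegative, scale: an integer value of 1 with a scale of 2 is 0.01
decimal viaParts = new decimal(1, 0, 0, false, 2);
Console.WriteLine(viaParts == 0.01m);                                  // True
Console.WriteLine(viaParts == new decimal(new[] { 1, 0, 0, 131072 })); // True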

You should know exactly how a decimal is stored in memory.
You can use this method to generate the desired value:
public static decimal Base10FractionGenerator(int digits)
{
    if (digits < 0 || digits > 28)
        throw new ArgumentException($"'{nameof(digits)}' must be between 0 and 28");

    // 1 in the low int, with the requested scale shifted into bits 16-23 of the flags int
    return new decimal(new[] { 1, 0, 0, digits << 16 });
}
Use it like
Console.WriteLine(Base10FractionGenerator(0));
Console.WriteLine(Base10FractionGenerator(2));
Console.WriteLine(Base10FractionGenerator(5));
Here is the result
1
0.01
0.00001

The particular constructor you're talking about generates a decimal from four 32-bit values. Unfortunately, newer versions of the Common Language Infrastructure (CLI) leave its exact format unspecified (presumably to allow implementations to support different decimal formats) and now merely guarantee a minimum precision and range for decimal numbers. Earlier versions of the CLI, however, did define that format exactly as Microsoft's implementation does, so it is probably kept that way in Microsoft's implementation for backward compatibility. Still, it is not ruled out that other CLI implementations will interpret the four 32-bit values of the Decimal constructor differently.

Decimals are exact numerics; you can use == or != to test for equality.
Perhaps this line of code comes from some other place, where it made sense at some particular point in time.
I'd clean it up.

Related

Calculators working with larger numbers than 18446744073709551615

When I initialize a ulong with the value 18446744073709551615, add 1 to it, and display it to the console, it displays 0, which is totally expected.
I know this question sounds stupid, but I have to ask it: if my computer has a 64-bit CPU, how is my calculator able to work with numbers larger than 18446744073709551615?
I suppose floating-point has a lot to do here.
I would like to know exactly how this happens.
Thank you.
working with larger numbers than 18446744073709551615
"if my Computer has a 64-bit architecture CPU" --> The architecture bit size is largely irrelevant.
Consider how you are able to add 2 decimal digits whose sum is more than 9. There is a carry generated and then used when adding the next most significant decimal place.
The CPU can do the same but with base 18446744073709551616 instead of base 10. It uses a carry bit as well as a sign and overflow bit to perform extended math.
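To make that carry idea concrete, here is a small sketch (mine, using base-2^32 limbs rather than the CPU's base 2^64, purely for illustration) of adding two multi-word numbers:
using System;

class MultiWordAdd
{
    // Adds two little-endian base-2^32 numbers (element 0 is the least significant limb),
    // propagating a carry from each limb to the next, just like column addition in base 10.
    static uint[] Add(uint[] a, uint[] b)
    {
        int n = Math.Max(a.Length, b.Length);
        uint[] result = new uint[n + 1];
        ulong carry = 0;
        for (int i = 0; i < n; i++)
        {
            ulong sum = carry
                      + (i < a.Length ? a[i] : 0u)
                      + (i < b.Length ? b[i] : 0u);
            result[i] = (uint)sum;   // low 32 bits stay in this limb
            carry = sum >> 32;       // anything above 32 bits carries into the next limb
        }
        result[n] = (uint)carry;
        return result;
    }

    static void Main()
    {
        // 18446744073709551615 is 0xFFFFFFFF_FFFFFFFF: two all-ones limbs.
        uint[] max = { 0xFFFFFFFF, 0xFFFFFFFF };
        uint[] one = { 1 };
        Console.WriteLine(string.Join(", ", Add(max, one))); // 0, 0, 1  (that is, 2^64)
    }
}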
I suppose floating-point has a lot to do here.
This is nothing to do with floating point.
You say you're using ulong, which means you're using unsigned 64-bit arithmetic. The largest value you can store is therefore "all ones" for 64 bits - aka UInt64.MaxValue, which is the value you've discovered: https://learn.microsoft.com/en-us/dotnet/api/system.uint64.maxvalue
If you want to store arbitrarily large numbers: there are APIs for that - for example BigInteger. However, arbitrary size comes at a cost, so it isn't the default, and certainly isn't what you get when you use ulong (or double, or decimal, etc - all the compiler-level numeric types have fixed size).
So: consider using BigInteger
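A minimal usage example (it assumes a reference to System.Numerics):
using System;
using System.Numerics;

class BigIntegerDemo
{
    static void Main()
    {
        BigInteger big = ulong.MaxValue;   // 18446744073709551615, via the implicit conversion
        big += 1;                          // no wrap-around: BigInteger grows as needed
        Console.WriteLine(big);            // 18446744073709551616
    }
}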
Either way, you have a 64-bit processor and are limited to 64-bit math, so your question is easiest to explain with an explicit example of how this is solved by BigInteger in the System.Numerics namespace, available in .NET Framework 4.8 for example. The basic idea is to 'decompose' the number into an array representation.
'Decompose' is meant here in the mathematical sense:
"express (a number or function) as a combination of simpler components."
Internally, BigInteger uses an array (actually multiple internal constructs) and a helper class called BigIntegerBuilder. It can implicitly convert a UInt64 integer without a problem; for even bigger numbers you can use the + operator, for example.
BigInteger bignum = new BigInteger(18446744073709551615);
bignum += 1;
You can read about the implicit operator here:
https://referencesource.microsoft.com/#System.Numerics/System/Numerics/BigInteger.cs
public static BigInteger operator +(BigInteger left, BigInteger right)
{
    left.AssertValid();
    right.AssertValid();
    if (right.IsZero) return left;
    if (left.IsZero) return right;
    int sign1 = +1;
    int sign2 = +1;
    BigIntegerBuilder reg1 = new BigIntegerBuilder(left, ref sign1);
    BigIntegerBuilder reg2 = new BigIntegerBuilder(right, ref sign2);
    if (sign1 == sign2)
        reg1.Add(ref reg2);
    else
        reg1.Sub(ref sign1, ref reg2);
    return reg1.GetInteger(sign1);
}
In the code above from ReferenceSource you can see that we use the BigIntegerBuilder to add the left and right parts, which are also BigInteger constructs.
Interestingly, it keeps its internal structure in a private array called "_bits", so that is the answer to your question: BigInteger keeps track of an array of 32-bit integers and is therefore able to handle big integers, even beyond 64 bits.
You can drop this code into a console application or LINQPad (which has the .Dump() method I use here) and inspect it:
BigInteger bignum = new BigInteger(18446744073709551615);
bignum.GetType().GetField("_bits",
    BindingFlags.NonPublic | BindingFlags.Instance).GetValue(bignum).Dump();
A detail about BigInteger is revealed in a comment in its source code on Reference Source (quoted below): for values that fit in an int, BigInteger stores the value in the _sign field; for other values, the _bits field is used.
Obviously, the internal array needs to be convertible into a base-10 representation so humans can read it; the ToString() method converts the BigInteger to that string representation.
For a more in-depth understanding, consider .NET source stepping to step into the code and see how the mathematics is carried out. But for a basic understanding: BigInteger uses an internal representation composed of an array of 32-bit integers, which is transformed into a readable format, and which allows numbers bigger than even Int64.
// For values int.MinValue < n <= int.MaxValue, the value is stored in sign
// and _bits is null. For all other values, sign is +1 or -1 and the bits are in _bits

Inverting all 32 bits in a number using binary operators in c#

I am wondering how you take a number (for example 9), convert it to a 32-bit int (00000000000000000000000000001001), then invert or flip every bit (11111111111111111111111111110110) so that the zeroes become ones and the ones become zeroes.
I know how to do that by replacing the numbers in a string, but I need to know how to do that with binary operators on a binary number.
I think you have to use this operator, "~", but it just gives me a negative number when I use it on a value.
That is the correct behavior. The int data type in C# is a signed integer, so 11111111111111111111111111110110 is in fact a negative number.
As Marc pointed out, if you want to use unsigned values declare your number as a uint.
If you look at the decimal version of your number, it's a negative number.
If you declare it as an unsigned int, it's a positive one.
But that doesn't matter; in binary it will always be 11111111111111111111111111110110.
Try this:
int number = 9;
Console.WriteLine(Convert.ToString(number, 2)); //Gives you 1001
number = ~number; //Invert all bits
Console.WriteLine(Convert.ToString(number, 2));
//Gives you your wanted result: 11111111111111111111111111110110
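If you want to see the same bit pattern as a positive number, you can reinterpret it as a uint (a small addition of mine, continuing the snippet above):
uint asUnsigned = unchecked((uint)number);
Console.WriteLine(number);       // -10 (the two's complement interpretation)
Console.WriteLine(asUnsigned);   // 4294967286 (the same bits read as unsigned)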

.net decimal - remove scale, solution that is guaranteed to work

I want to convert a decimal a with scale > 0 to its equivalent decimal b with scale 0 (suppose that there is an equivalent decimal without losing precision). Success is defined by having b.ToString() return a string without any trailing zeroes or by extracting the scale via GetBits and confirming that it is 0.
Easy options I found:
Decimal scale2 = new Decimal(100, 0, 0, false, 2);
string scale2AsString = scale2.ToString(System.Globalization.CultureInfo.InvariantCulture);
// toString includes trailing zeroes
Assert.IsTrue(scale2AsString.Equals("1.00"));
// can use format specifier to specify format
string scale2Formatted = scale2.ToString("G0");
Assert.IsTrue(scale2Formatted.Equals("1"));
// but what if we want to pass the decimal to third party code that does not use format specifiers?
// option 1, use Decimal.Truncate or Math.Truncate (Math.Truncate calls Decimal.Truncate, I believe)
Decimal truncated = Decimal.Truncate(scale2);
string truncatedAsString = truncated.ToString(System.Globalization.CultureInfo.InvariantCulture);
Assert.IsTrue(truncatedAsString.Equals("1"));
// option 2, division trick
Decimal divided = scale2 / 1.000000000000000000000000000000000m;
string dividedAsString = divided.ToString(System.Globalization.CultureInfo.InvariantCulture);
Assert.IsTrue(dividedAsString.Equals("1"));
// option 3, if we expect the decimal to fit in int64, convert to int64 and back
Int64 asInt64 = Decimal.ToInt64(scale2);
Decimal backToDecimal = new Decimal(asInt64);
string backToDecimalString = backToDecimal.ToString(System.Globalization.CultureInfo.InvariantCulture);
Assert.IsTrue(backToDecimalString.Equals("1"));
// option 4, convert to BigInteger then back using BigInteger's explicit conversion to decimal
BigInteger convertedToBigInteger = new BigInteger(scale2);
Decimal bigIntegerBackToDecimal = (Decimal)convertedToBigInteger;
string bigIntegerBackToDecimalString = bigIntegerBackToDecimal.ToString(System.Globalization.CultureInfo.InvariantCulture);
Assert.IsTrue(bigIntegerBackToDecimalString.Equals("1"));
So plenty of options, and certainly there are more. But which of these options are actually guaranteed to work?
Option 1: MSDN does not mention that the scale is changed when calling Truncate, so using this method seems to be relying on an implementation detail. Internally, Truncate calls FCallTruncate, for which I did not find any documentation.
Option 2 may be mandated by the CLI spec, but I did not find which exact specification that would be; I did not find it in the ECMA specification.
Option 3 (ToInt64 also uses FCallTruncate internally) will work judging by the reference source (the constructor taking a ulong sets the flags, and thus the scale, to 0), but the documentation again makes no mention of scale.
Option 4, BigInteger calls Decimal.Truncate, with the comment:
// First truncate to get scale to 0 and extract bits
int[] bits = Decimal.GetBits(Decimal.Truncate(value));
So clearly Microsoft internally also thinks that Decimal.Truncate will set the scale to 0.
But I am looking for a method that is guaranteed to work without relying on implementation details and works for all the decimal where this can technically work (cannot rescale a Decimal.MaxValue for example). None of the options above seems to fit the bill for this requirement.
You pretty much answered your question yourself. Personally, I would not obsess so much about which method to use. If your method works now - even if it is undocumented - then it will most likely work in the future. And if a future update to .NET breaks your method, then hopefully you have a test that will highlight this when you upgrade your application framework.
Going through your options:
1) Decimal.Truncate sets the scale to 0 but if it is undocumented then you may decide to not rely on this fact.
2) Dividing by 1.0000 ... may give you the desired result, but it is not obvious what is going on, and if it is not documented then this is probably the worst option.
3 and 4) These options should work for you. You convert the decimal to an integer and then back to a decimal. Obviously, the decimal created from an integer has scale 0. Any other value would be wrong even though it is not explicitly documented. Option 4) is able to handle even Decimal.MaxValue and Decimal.MinValue.
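If you do go with option 3 or 4, it is cheap to add the verification you described (extracting the scale via GetBits) so that a future framework change would be caught immediately. A sketch of that, assuming the value is integral, might look like this:
using System;
using System.Numerics;

static class DecimalScale
{
    // Removes the scale of a decimal holding an integral value (option 4: round-trip via BigInteger),
    // then verifies via GetBits that the scale (bits 16-23 of the flags element) really is 0.
    public static decimal RemoveScale(decimal value)
    {
        decimal rescaled = (decimal)new BigInteger(value);

        int flags = decimal.GetBits(rescaled)[3];
        int scale = (flags >> 16) & 0xFF;
        if (scale != 0)
            throw new InvalidOperationException("Rescaling did not produce scale 0.");

        return rescaled;
    }
}
For example, DecimalScale.RemoveScale(1.00m).ToString() then returns "1".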

The modulo operator (%) gives a different result for different .NET versions in C#

I am encrypting the user's input to generate a string for a password, but a line of code gives different results in different versions of the framework. Partial code, with the value of the key pressed by the user:
Key pressed: 1. Variable ascii is 49. Value of 'e' and 'n' after some calculation:
e = 103,
n = 143,
Math.Pow(ascii, e) % n
Result of above code:
In .NET 3.5 (C#)
Math.Pow(ascii, e) % n
gives 9.0.
In .NET 4 (C#)
Math.Pow(ascii, e) % n
gives 77.0.
Math.Pow() gives the correct (same) result in both versions.
What is the cause, and is there a solution?
Math.Pow works on double-precision floating-point numbers; thus, you shouldn't expect more than the first 15–17 digits of the result to be accurate:
All floating-point numbers also have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Double value has up to 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
However, modulo arithmetic requires all digits to be accurate. In your case, you are computing 49^103, whose result consists of 175 digits, making the modulo operation meaningless in both your answers.
To work out the correct value, you should use arbitrary-precision arithmetic, as provided by the BigInteger class (introduced in .NET 4.0).
int val = (int)(BigInteger.Pow(49, 103) % 143); // gives 114
Edit: As pointed out by Mark Peters in the comments below, you should use the BigInteger.ModPow method, which is intended specifically for this kind of operation:
int val = (int)BigInteger.ModPow(49, 103, 143); // gives 114
Apart from the fact that your hashing function is not a very good one *, the biggest problem with your code is not that it returns a different number depending on the version of .NET, but that in both cases it returns an entirely meaningless number: the correct answer to the problem is
49^103 mod 143 = 114. (link to Wolfram Alpha)
You can use this code to compute this answer:
// Square-and-multiply modular exponentiation.
// Note: tmp * tmp can overflow int for larger moduli; it is fine for numbers this small.
private static int PowMod(int a, int b, int mod) {
    if (b == 0) {
        return 1;
    }
    var tmp = PowMod(a, b / 2, mod);
    tmp *= tmp;
    if (b % 2 != 0) {
        tmp *= a;
    }
    return tmp % mod;
}
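For the numbers in the question this gives the same value as BigInteger.ModPow above:
Console.WriteLine(PowMod(49, 103, 143)); // 114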
The reason why your computation produces a different result is that in order to produce an answer, you use an intermediate value that drops most of the significant digits of the 49^103 number: only the first 16 of its 175 digits are correct!
1230824813134842807283798520430636310264067713738977819859474030746648511411697029659004340261471771152928833391663821316264359104254030819694748088798262075483562075061997649
The remaining 159 digits are all wrong. The mod operation, however, seeks a result that requires every single digit to be correct, including the very last ones. Therefore, even the tiniest improvement to the precision of Math.Pow that may have been implemented in .NET 4 would result in a drastic difference in your calculation, which essentially produces an arbitrary result.
* Since this question talks about raising integers to high powers in the context of password hashing, it may be a very good idea to read this answer before deciding whether your current approach should be changed for a potentially better one.
What you see is rounding error in double. Math.Pow works with double and the difference is as below:
.NET 2.0 and 3.5 => var powerResult = Math.Pow(ascii, e); returns:
1.2308248131348429E+174
.NET 4.0 and 4.5 => var powerResult = Math.Pow(ascii, e); returns:
1.2308248131348427E+174
Notice the last digit before the E; that is what causes the difference in the result. It's not the modulus operator (%).
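You can see both effects side by side with a snippet like this (my sketch; it assumes .NET 4.0+ for BigInteger):
using System;
using System.Numerics;

class PowComparison
{
    static void Main()
    {
        double approx = Math.Pow(49, 103);           // only the first ~15-17 digits are meaningful
        BigInteger exact = BigInteger.Pow(49, 103);  // all 175 digits, exactly

        Console.WriteLine(approx.ToString("R"));
        Console.WriteLine(exact);
        Console.WriteLine(BigInteger.ModPow(49, 103, 143)); // 114, the value the question is after
    }
}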
Floating-point precision can vary from machine to machine, and even on the same machine.
.NET runs your apps on a virtual machine, but that virtual machine changes from version to version.
Therefore you shouldn't rely on it to produce consistent results. For encryption, use the classes that the Framework provides rather than rolling your own.
There are a lot of answers about the way the code is bad. However, as to why the result is different…
Intel's FPUs use the 80-bit format internally to get more precision for intermediate results. So if a value is in the processor register it gets 80 bits, but when it is written to the stack it gets stored at 64 bits.
I expect that the newer version of .NET has a better optimizer in its Just in Time (JIT) compilation, so it is keeping a value in a register rather than writing it to the stack and then reading it back from the stack.
It may be that the JIT can now return a value in a register rather than on the stack. Or pass the value to the MOD function in a register.
See also Stack Overflow question What are the applications/benefits of an 80-bit extended precision data type?
Other processors, e.g. ARM, will give different results for this code.
Maybe it's best to calculate it yourself using only integer arithmetic. Something like:
int n = 143;
int e = 103;
int result = 1;
int ascii = (int) 'a';
for (int i = 0; i < e; ++i)
    result = result * ascii % n;
You can compare the performance with the performance of the BigInteger solution posted in the other answers.

convert double value to binary value

How can I convert a double value to a binary value?
I have a value like the one below: 125252525235558554452221545332224587265. I want to convert this to binary format, so I am keeping it in a double and then trying to convert it to binary (1's and 0's). I am using C#.NET.
Well, you haven't specified a platform or what sort of binary value you're interested in, but in .NET there's BitConverter.DoubleToInt64Bits which lets you get at the IEEE 754 bits making up the value very easily.
In Java there's Double.doubleToLongBits which does the same thing.
Note that if you have a value such as "125252525235558554452221545332224587265" then you've got more information than a double can store accurately in the first place.
In C, you can do it for instance this way, which is a classic use of the union construct:
int i;
union {
    double x;
    unsigned char byte[sizeof (double)];
} converter;
converter.x = 5.5555555555556e18;
for (i = 0; i < sizeof converter.byte; i++)
    printf("%02x", converter.byte[i]);
If you stick this in a main() and run it, it might print something like this:
~/src> gcc -o floatbits floatbits.c
~/src> ./floatbits
ba b5 f6 15 53 46 d3 43
Note though that this, of course, is platform-dependent in its endianness. The above is from a Linux system running on a Sempron CPU, i.e. it's little endian.
A decade late but hopefully this will help someone:
// Converts a double value to a string in base 2 for display.
// Example: 123.5 --> "0:10000000101:1110111000000000000000000000000000000000000000000000"
// Created by Ryan S. White in 2020, Released under the MIT license.
string DoubleToBinaryString(double val)
{
    long v = BitConverter.DoubleToInt64Bits(val);
    string binary = Convert.ToString(v, 2);
    return binary.PadLeft(64, '0').Insert(12, ":").Insert(1, ":");
}
If you mean you want to do it yourself, then this is not a programming question.
If you want to make a computer do it, the easiest way is to use a floating point input routine and then display the result in its hex form. In C++:
double f = atof ("5.5555555555556E18");
unsigned char *b = (unsigned char *) &f;
for (int j = 0; j < 8; ++j)
    printf (" %02x", b [j]);
A double value already IS a binary value; it is just a matter of the representation you wish it to have. In a programming language, when you call it a double, the language interprets it one way; if you happen to call the same chunk of memory an int, it is not the same number.
So it depends what you really want... If you need to write it to disk or to network, then you need to think about BigEndian/LittleEndian.
For these huge numbers (which cannot be represented accurately using a double) you need to use some specialized class to hold the information needed.
C# provides the Decimal type:
The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding. For example, the following code produces a result of 0.9999999999999999999999999999 rather than 1.
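The code that quote refers to is not included above; the usual illustration (my reconstruction, not the original documentation snippet) is dividing by three and multiplying back:
decimal dividend = 1m;
decimal divisor = 3m;
Console.WriteLine(dividend / divisor * divisor); // displays 0.9999999999999999999999999999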
If you need more precision than this, you need to make your own class, I guess. There is one here for ints: http://sourceforge.net/projects/cpp-bigint/ although it seems to be for C++.
