I want to convert a decimal a with scale > 0 to its equivalent decimal b with scale 0 (assume that an equivalent decimal exists, i.e. no precision is lost). Success is defined by having b.ToString() return a string without any trailing zeroes, or by extracting the scale via GetBits and confirming that it is 0.
Easy options I found:
Decimal scale2 = new Decimal(100, 0, 0, false, 2);
string scale2AsString = scale2.ToString(System.Globalization.CultureInfo.InvariantCulture);
// ToString includes trailing zeroes
Assert.IsTrue(scale2AsString.Equals("1.00"));
// can use format specifier to specify format
string scale2Formatted = scale2.ToString("G0");
Assert.IsTrue(scale2Formatted.Equals("1"));
// but what if we want to pass the decimal to third party code that does not use format specifiers?
// option 1, use Decimal.Truncate or Math.Truncate (Math.Truncate calls Decimal.Truncate, I believe)
Decimal truncated = Decimal.Truncate(scale2);
string truncatedAsString = truncated.ToString(System.Globalization.CultureInfo.InvariantCulture);
Assert.IsTrue(truncatedAsString.Equals("1"));
// option 2, division trick
Decimal divided = scale2 / 1.000000000000000000000000000000000m;
string dividedAsString = divided.ToString(System.Globalization.CultureInfo.InvariantCulture);
Assert.IsTrue(dividedAsString.Equals("1"));
// option 3, if we expect the decimal to fit in int64, convert to int64 and back
Int64 asInt64 = Decimal.ToInt64(scale2);
Decimal backToDecimal = new Decimal(asInt64);
string backToDecimalString = backToDecimal.ToString(System.Globalization.CultureInfo.InvariantCulture);
Assert.IsTrue(backToDecimalString.Equals("1"));
// option 4, convert to BigInteger then back using BigInteger's explicit conversion to decimal
BigInteger convertedToBigInteger = new BigInteger(scale2);
Decimal bigIntegerBackToDecimal = (Decimal)convertedToBigInteger;
string bigIntegerBackToDecimalString = bigIntegerBackToDecimal.ToString(System.Globalization.CultureInfo.InvariantCulture);
Assert.IsTrue(bigIntegerBackToDecimalString.Equals("1"));
So plenty of options, and certainly there are more. But which of these options are actually guaranteed to work?
Option 1: MSDN does not mention that the scale is changed when calling Truncate, so using this method seems to be relying on an implementation detail. Internally, Truncate calls FCallTruncate, for which I did not find any documentation.
Option 2 may be mandated by the CLI spec, but I could not find the exact clause that would require it; I did not find it in the ECMA specification.
Option 3 (ToInt64 also uses FCallTruncate internally) will work judging by the reference source (the constructor taking a ulong sets flags, and thus the scale, to 0), but the documentation again makes no mention of scale.
Option 4, BigInteger calls Decimal.Truncate, with the comment:
// First truncate to get scale to 0 and extract bits
int[] bits = Decimal.GetBits(Decimal.Truncate(value));
So clearly Microsoft internally also thinks that Decimal.Truncate will set the scale to 0.
But I am looking for a method that is guaranteed to work without relying on implementation details and works for all the decimal where this can technically work (cannot rescale a Decimal.MaxValue for example). None of the options above seems to fit the bill for this requirement.
You pretty much answered your question yourself. Personally, I would not obsess so much about which method to use. If your method works now - even if it is undocumented - then it will most likely work in the future. And if a future update to .NET breaks your method, then hopefully you have a test that will highlight this when you upgrade your application framework.
Going through your options:
1) Decimal.Truncate sets the scale to 0 but if it is undocumented then you may decide to not rely on this fact.
2) Dividing by 1.0000... may give you the desired result, but it is not obvious what is going on, and if it is not documented then this is probably the worst option.
3 and 4) These options should work for you. You convert the decimal to an integer and then back to a decimal. Obviously, the decimal created from an integer has scale 0. Any other value would be wrong even though it is not explicitly documented. Option 4) is able to handle even Decimal.MaxValue and Decimal.MinValue.
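For reference, a minimal sketch of the scale check mentioned in the question (the helper name GetScale is mine): GetBits returns four ints, and the scale lives in bits 16-23 of the fourth element.
static int GetScale(decimal d)
{
    // bits[3] holds the sign (bit 31) and the scale (bits 16-23).
    return (decimal.GetBits(d)[3] >> 16) & 0xFF;
}
// e.g. GetScale(new Decimal(100, 0, 0, false, 2)) == 2, and on current .NET
// GetScale(Decimal.Truncate(new Decimal(100, 0, 0, false, 2))) == 0 - though, as discussed, that is not contractually documented.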
Although my question sounds trivial, it really is NOT. Hope you can help me.
I want to implement interval arithmetic in my .NET (C#) project. This means that every number is defined by a lower bound and an upper bound. This is helpful for problems like
1 / 3 = 0.333333333333333 (15 significant digits)
since you would then have
1 / 3 = [ 0.33333333333333 , 0.333333333333334 ] (14 significant digits each)
, so I know FOR SURE that the right answer lies between those two numbers. Without the interval representation I would already be carrying a rounding error (i.e. 0.0000000000000003).
To achieve this I wrote my own Interval type that overloads all standard operators like +-*/, etc. To make this type work correctly I need to be able to round the result of 1 / 3 in two directions. Rounding the result down will give me the lower bound for my interval, rounding the result up will give me the upper bound for my interval.
.NET has the Math.Round(double, int) method which rounds the double to int decimal places. Looks great, but it can't be forced to round up or down. Math.Round(1.0/3.0, 14) would round down, but the up-rounding to 0.33...34 that I also need can't be achieved this way.
But there are Math.Ceiling and Math.Floor, you might say! Okay, those methods round to the next lower or upper integer. So if I want to round to 14 decimal places I first need to rescale my result:
1 / 3 = 0.333333333333333 -> *E14 -> 33333333333333.3
So now I can call Math.Ceiling and Math.Floor and get both rounded results after scaling back:
33333333333333 & 33333333333334 -> /E14 -> 0.33333333333333 & 0.33333333333334
Looks great, but: Let's say my number goes near the double.MaxValue. I can't just *E14 a value near double.MaxValue since this will give me an OverflowException. So this is no solution either.
And, to top all of these facts: All this fails even harder when trying to round 0.9999999999999999999999999 (more than 15 digits) since the internal representation is already rounded to 1 before I can even start trying to round down.
I could try to somehow parse a string containing the double but this won't help since (1/3 * 3).ToString() will already print 1 instead of 0.99...9.
Decimal does not work either since I don't want that deep precision, 14 digits are enough; but I still want that double range!
In C++, where several interval arithmetic implementations exist, this problem could be solved by dynamically telling the processor to switch its rounding mode to, for example, "always down" or "always up". I couldn't find any way to do this in .NET.
So, do you have any ideas?
Thanks in advance!
Assume nextDown(x) is a function that returns the largest double that is less than x, and nextUp(x) is a function that returns the smallest double that is greater than x. See Get next smallest Double number for implementation ideas.
Where you would have rounded a lower bound result down, instead use the nextDown of the round-to-nearest result. Where you would have rounded an upper bound up, use the nextUp of the round-to-nearest result.
This method ensures the interval continues to contain the exact real number result. It introduces extra rounding error - in some cases the lower bound will be one ULP smaller than it should be, and/or the upper bound will be one ULP bigger. However, it is a minimal widening of the interval, much less widening than you would get working in decimal or by suppressing low significance bits.
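A minimal sketch of such helpers, assuming the usual bit-twiddling approach via BitConverter (the names NextUp/NextDown are mine; NaN and infinities are passed through unchanged):
static double NextUp(double x)
{
    if (double.IsNaN(x) || double.IsPositiveInfinity(x))
        return x;
    if (x == 0.0)
        return double.Epsilon; // smallest positive subnormal
    long bits = BitConverter.DoubleToInt64Bits(x);
    // For positive values incrementing the bit pattern moves up one ULP;
    // for negative values it moves away from zero, so decrement instead.
    return BitConverter.Int64BitsToDouble(x > 0 ? bits + 1 : bits - 1);
}

static double NextDown(double x) => -NextUp(-x);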
This might be more like a long comment than a real answer.
This code returns an "interval" (I just use Tuple<,>, you can use your own Interval type) based on truncating the seven least significant bits:
static Tuple<double, double> GetMinMaxIntervalBasedOnBinaryNumbersThatAreRoundOnLastSevenBits(double number)
{
    if (double.IsInfinity(number) || double.IsNaN(number))
        return Tuple.Create(number, number); // maybe treat this case differently
    var i = BitConverter.DoubleToInt64Bits(number);
    const int numberOfBitsToClear = 7; // your seven, can change this value, must be below 52
    const long precision = 1L << numberOfBitsToClear;
    const long bitMask = ~(precision - 1L);
    // truncate i
    i &= bitMask;
    return Tuple.Create(BitConverter.Int64BitsToDouble(i), BitConverter.Int64BitsToDouble(i + precision));
}
Disclaimer: I am not sure if this is useful for any purpose. In particular not sure it is useful for interval arithmetic.
With this code, GetMinMaxIntervalBasedOnBinaryNumbersThatAreRoundOnLastSevenBits(1.0 / 3.0) returns the tuple (0.333333333333329, 0.333333333333336).
This code, just like the code you ask for in your question, has the obvious "issue" that if the original value is close to (or even equal to) one of the "round" numbers we use, then the returned interval is "skewed", with the original number being close to one of the ends of the interval. For example, with input 42.0 (already round), you get out the tuple (42, 42.0000000000009).
One good thing about this code is I expect it to be extremely fast.
I am maintaining a C# desktop application on Windows 7, using Visual Studio 2013. Somewhere in the code there is the following line, which tries to create the decimal value 0.01 using the Decimal(Int32[]) constructor:
decimal d = new decimal(new int[] { 1, 0, 0, 131072 });
First question is, is it different from the following?
decimal d = 0.01M;
If it is not different, why did the developer go through the trouble of coding it like that?
I need to change this line in order to create dynamic values. Something like:
decimal d = (decimal) (1 / Math.Pow(10, digitNumber));
Am I going to cause some unwanted behavior this way?
It seems useful to me when the source of the decimal consists of bits.
The decimal used in .NET has an implementation that is based on a sequence of bit parameters (not just one stream of bits like with an int), so it can be useful to construct a decimal with bits when you communicate with other systems which return a decimal through a blob of bytes (a socket, from a piece of memory, etc).
It is then easy to convert the set of bits to a decimal. No need for fancy conversion code. Also, you can construct a decimal from the inputs defined in the standard, which makes it convenient for testing the .NET Framework too.
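For example, a sketch of rebuilding a decimal from a 16-byte blob, assuming the blob uses the same lo/mid/hi/flags layout (four little-endian ints) that decimal.GetBits produces:
static decimal DecimalFromBlob(byte[] blob)
{
    int[] bits = new int[4];
    Buffer.BlockCopy(blob, 0, bits, 0, 16); // lo, mid, hi, flags
    return new decimal(bits);
}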
The decimal(int[] bits) constructor allows you to give a bitwise definition of the decimal you're creating. bits must be a four-element int array where:
bits[0], bits[1], and bits[2] make up the 96-bit integer number.
bits[3] contains the scale factor and the sign.
It just allows you to be really precise about the definition of the decimal; judging from your example, I don't think you need that level of precision.
See here for more detail on using that constructor or here for other constructors that may be more appropriate for you
To answer your question more specifically: if digitNumber is the desired exponent (scale), then decimal d = new decimal(new int[] { 1, 0, 0, digitNumber << 16 }); does what you want, since the exponent goes in bits 16-23 of the last int in the array.
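A quick sanity check of that claim (sketch): 131072 is just 2 << 16, i.e. a scale of 2.
decimal d = new decimal(new int[] { 1, 0, 0, 2 << 16 });
Console.WriteLine(d);                                          // 0.01 (with an invariant-style culture)
Console.WriteLine(string.Join(", ", decimal.GetBits(0.01m)));  // 1, 0, 0, 131072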
The definition in the xml is
//
// Summary:
// Initializes a new instance of System.Decimal to a decimal value represented
// in binary and contained in a specified array.
//
// Parameters:
// bits:
// An array of 32-bit signed integers containing a representation of a decimal
// value.
//
// Exceptions:
// System.ArgumentNullException:
// bits is null.
//
// System.ArgumentException:
// The length of the bits is not 4.-or- The representation of the decimal value
// in bits is not valid.
So for some unknown reason the original developer wanted to initialize his decimal this way. Maybe he just wanted to confuse someone in the future.
It can't possibly affect your code if you change this to
decimal d = 0.01m;
because
(new decimal(new int[] { 1, 0, 0, 131072})) == 0.01m
You should know exactly how a decimal is stored in memory.
You can use this method to generate the desired value:
public static decimal Base10FractionGenerator(int digits)
{
    if (digits < 0 || digits > 28)
        throw new ArgumentException($"'{nameof(digits)}' must be between 0 and 28");
    return new decimal(new[] { 1, 0, 0, digits << 16 });
}
Use it like
Console.WriteLine(Base10FractionGenerator(0));
Console.WriteLine(Base10FractionGenerator(2));
Console.WriteLine(Base10FractionGenerator(5));
Here is the result
1
0.01
0.00001
The particular constructor you're talking about generates a decimal from four 32-bit values. Unfortunately, newer versions of the Common Language Infrastructure (CLI) leave its exact format unspecified (presumably to allow implementations to support different decimal formats) and now merely guarantee at least a specific precision and range of decimal numbers. However, earlier versions of the CLI do define that format exactly as Microsoft's implementation does, so it's probably kept that way in Microsoft's implementation for backward compatibility. However, it's not ruled out that other implementations of the CLI will interpret the four 32-bit values of the Decimal constructor differently.
Decimals are exact numerics, you can use == or != to test for equality.
Perhaps this line of code comes from some other place where it made sense at some particular point in time.
I'd clean it up.
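For example (a small sketch): trailing zeroes change the stored scale but not equality, so == behaves as you would expect.
Console.WriteLine(2.01m == 2.0100m);            // True
Console.WriteLine(decimal.GetBits(2.01m)[3]);    // 131072 (scale 2)
Console.WriteLine(decimal.GetBits(2.0100m)[3]);  // 262144 (scale 4)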
The following test will fail in C#
Assert.AreEqual<double>(10.0d, 16.1d - 6.1d);
The problem appears to be a floating point error.
16.1d - 6.1d == 10.000000000000002
This is causing me headaches in writing unit tests for code that uses double. Is there a way to fix this?
There is no exact conversion between the decimal system and the binary representation of a double (see the excellent comment by @PatriciaShanahan below on why).
In this case the .1 part of the numbers is the problem, it cannot be finitely represented in a double (like 1/3 can't be finitely represented exactly as a decimal number).
A code snippet to explain what happens:
double larger = 16.1d;          // closest double representation of 16.1
double smaller = 6.1d;          // closest double representation of 6.1
double diff = larger - smaller; // closest double to the difference; since the smaller value
                                // has more precision available, the result has better
                                // precision than larger but worse than smaller.
                                // The difference shows up as the ...000002.
Always use the Assert.AreEqual overload that takes a delta parameter when comparing doubles.
Alternatively, if you really need exact decimal arithmetic, use the decimal data type, which has a different internal representation and would return exactly 10 in your example.
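A minimal sketch of both suggestions, using the MSTest Assert.AreEqual(double expected, double actual, double delta) overload (the tolerance 1e-9 is an arbitrary choice):
Assert.AreEqual(10.0d, 16.1d - 6.1d, 1e-9);  // passes: the difference is within the delta
Assert.AreEqual(10.0m, 16.1m - 6.1m);        // passes: the decimal subtraction is exact here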
Floating-point numbers are an approximation of the actual value stored with a binary exponent, so the test fails correctly. If you require exact equivalence of two decimal numbers you may want to check out the decimal data type.
If you are using NUnit, please use the Within option. You can find additional information here: http://www.nunit.org/index.php?p=equalConstraint&r=2.6.2.
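A small sketch of the NUnit constraint syntax with Within (the tolerance 1e-9 is again an arbitrary choice):
Assert.That(16.1d - 6.1d, Is.EqualTo(10.0d).Within(1e-9));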
I agree with anders abel. There won't be a way to do this using a float representation. As a direct consequence of IEEE 754-1985, only numbers that can be written as a finite sum of powers of two (i.e. of the form m * 2^e) can be stored and calculated with precisely (as long as the chosen bit count allows this).
For example: 1024 * 1.75 * 183.375 / 1040.0675 <-- will be stored precisely
10 / 1.1 <-- won't be stored precisely
If you are really interested in an exact representation of rational numbers, you could write your own number implementation using fractions.
This could be done by storing the numerator, denominator and sign. Then operations like multiply, subtract, etc. need to be implemented (it is very hard to ensure good performance). A GetString method could look like this (I assume cachedRepresentation, cachedDotIndex and cachedNumerator are member variables):
public string GetString(int digits)
{
    if (this.cachedRepresentation == "")
    {
        // Build the sign and integer part, and remember where the fraction starts.
        this.cachedRepresentation += this.positiveSign ? "" : "-";
        this.cachedRepresentation += this.numerator / this.denominator;
        this.cachedNumerator = 10 * (this.numerator % this.denominator);
        this.cachedDotIndex = this.cachedRepresentation.Length;
        this.cachedRepresentation += ".";
    }
    if ((this.cachedDotIndex + digits) < this.cachedRepresentation.Length)
        return this.cachedRepresentation.Substring(0, this.cachedDotIndex + digits + 1);
    // Append one fractional digit per iteration until enough digits are cached.
    while ((this.cachedDotIndex + digits) >= this.cachedRepresentation.Length)
    {
        this.cachedRepresentation += this.cachedNumerator / this.denominator;
        this.cachedNumerator = 10 * (this.cachedNumerator % this.denominator);
    }
    return this.cachedRepresentation;
}
This worked for me. In the operations themselves with large numbers I ran into problems with datatypes that were too small (I don't usually use C#). I think an experienced C# developer should be able to implement this without such issues.
If you want to implement this, you should reduce the fraction at initialization and before operations, using Euclid's greatest common divisor algorithm.
Irrational numbers can (in every case I know of) be specified by an algorithm that comes as close to the exact representation as you want (and the computer allows).
This is so many times repeated at SO, but I would want to state my question explicitly.
How is a decimal which would look like 2.0100 "rightly" presented to the user as another "decimal", 2.01?
I see a lot of questions on SO where the input is a string "2.0100" and need a decimal 2.01 out of it and questions where they need decimal 2.0100 to be represented as string "2.01". All this can be achieved by basic string.Trim, decimal.Parse etc. And these are some of the approaches followed:
decimal.Parse(2.0100.ToString("G29"))
Using # literal
Many string.Format options.
Various Regex options
My own one I used till now:
if (2.0100 == 0)
return 0;
decimal p = decimal.Parse(2.0100.ToString().TrimEnd('0'));
return p == 2.0100 ? p : 2.0100;
But I believe there has to be some correct way of doing it in .NET (at least 4) which deals with numeric operations and not string operations. I am asking for something that does not deal with the decimal as a string, because I feel that ain't the right way to do this. I'm trying to learn something new. And I would fancy my chances of seeing at least .1 seconds of performance gain, since I'm pulling tens of thousands of decimal values from the database :)
Question 2: If it ain't present in .NET, which is the most efficient string method to get a presentable value for the decimal?
Edit: I do not just want a decimal to be presented to users. In that case I could use it as a string. I do want it back as a decimal, because I will have to process those decimal values later. So going by the ToString approach, I first need to convert it to a string, and then parse it back to a decimal. I am looking for something that doesn't deal with the String class. Some option to convert decimal 2.0100 to decimal 2.01?
The "extra zeroes" that occur in a decimal value are there because the System.Decimal type stores those zeroes explicitly. For a System.Decimal, 1.23400 is a different value from 1.234, even though numerically they are equal:
The scaling factor also preserves any trailing zeroes in a Decimal number. Trailing zeroes do not affect the value of a Decimal number in arithmetic or comparison operations. However, trailing zeroes can be revealed by the ToString method if an appropriate format string is applied.
It's important to have the zeroes because many Decimal computations involve significant digits, which are a necessity of many scientific and high-precision calculations.
In your case, you don't care about them, but the appropriate answer is not "change Decimal for my particular application so that it doesn't store those zeroes". Instead, it's "present this value in a way that's meaningful to my users". And that's what decimal.ToString() is for.
The easiest way to format a decimal in a given format for the user is to use decimal.ToString()'s formatting options.
As for representing the value, 2.01 is equal to 2.0100. As long as you're within decimal's precision, it shouldn't matter how the value is stored in the system. You should only be worried with properly formatting the value for the user.
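A short sketch of that presentation-only formatting (the variable name stored is mine; "G29" and "0.##" are standard numeric format strings that suppress trailing zeroes):
decimal stored = 2.0100m;
Console.WriteLine(stored.ToString());        // 2.0100
Console.WriteLine(stored.ToString("G29"));   // 2.01
Console.WriteLine(stored.ToString("0.##"));  // 2.01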
Numbers are numbers and strings are strings. The concept of "two-ness" represented as a string in the English language is 2. The concept of "two-ness" represented as a number is not really possibly to show because when you observe a number you see it as a string. But for the sake of argument it could be 2 or 2.0 or 02 or 02.0 or even 10/5. These are all representations of "two-ness".
Your database isn't actually returning 2.0100, something that you are inspecting that value with is converting it to a string and representing it that way for you. Whether a number has zeros at the end of it is merely a preference of string formatting, always.
Also, never call decimal.Parse() on a decimal; it doesn't make sense. If you want to convert a decimal literal to a string, just call (2.0100m).ToString().TrimEnd('0').
As noted, a decimal that internally stores 2.0100 could differ from one that stores 2.01, and the default behaviour of ToString() can be affected.
I recommend that you never make use of this.
Firstly, decimal.Parse("2.0100") == decimal.Parse("2.01") returns true. While their internal representations are different this is IMO unfortunate. When I'm using decimal with a value of 2.01 I want to be thinking:
2.01
Not:
struct decimal
{
    private int flags;
    private int hi;
    private int lo;
    private int mid;
    /* methods that make this actually useful */
}
While different means of storing 2.01 in the above structure might exist, 2.01 remains 2.01.
If you care about it being presented as 2.01 and not as 2.0 or 2.0100, then you care about a string representation. Your concern is how the decimal is represented as a string, and that is how you should think about it at that stage. Consider the rule in question (minimum and maximum significant figures shown, and whether to include or exclude trailing zeros) and then code an appropriate ToString call.
And do this close to where the string is used.
When you care about 2.01, then deal with it as a decimal, and consider any code where the difference between 2.01 and 2.0100 matters to be a bug, and fix it.
Have a clear split in your code between where you are using strings, and where you are using decimals.
Ok, so I'm answering for myself, I got a solution.
return d / 1.00000000000000000000000000000m
That just does it. I did some benchmarking as well (the times given in the comments are the mode, not the mean):
Method:
internal static double Calculate(Action act)
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    act();
    sw.Stop();
    return sw.Elapsed.TotalSeconds;
}
Candidates:
return decimal.Parse(string.Format("{0:0.#############################}", d));
//0.02ms
return decimal.Parse(d.ToString("0.#############################"));
//0.014ms
if (d == 0)
return 0;
decimal p = decimal.Parse(d.ToString().TrimEnd('0').TrimEnd('.'));
return p == d ? p : d;
//0.016ms
return decimal.Parse(d.ToString("G29"));
//0.012ms
return d / 1.00000000000000000000000000000m;
//0.007ms
Needless to cover the regex options. I don't mean to say performance makes a lot of difference; I'm only pulling 5k to 20k rows at a time. But still, it's nice to know that a simpler and cleaner alternative to the string approach exists.
I'm messing around with Fourier transformations. Now I've created a class that does an implementation of the DFT (not doing anything like FFT atm). This is the implementation I've used:
public static Complex[] Dft(double[] data)
{
    int length = data.Length;
    Complex[] result = new Complex[length];
    for (int k = 1; k <= length; k++)
    {
        Complex c = Complex.Zero;
        for (int n = 1; n <= length; n++)
        {
            c += Complex.FromPolarCoordinates(data[n - 1], (-2 * Math.PI * n * k) / length);
        }
        result[k - 1] = 1 / Math.Sqrt(length) * c;
    }
    return result;
}
And these are the results I get from Dft({2,3,4})
Well it seems pretty okay, since those are the values I expect. There is only one thing I find confusing. And it all has to do with the rounding of doubles.
First of all, why are the first two numbers not exactly the same (0,8660..4438 vs 0,8660..443)? And why can't it calculate a zero where you'd expect one? I know 2.8E-15 is pretty close to zero, but it's not zero.
Does anyone know how these marginal errors occur, and whether I can (and would want to) do something about them?
It might seem that there's no real problem, because the errors are small. However, how do you deal with these rounding errors if you are, for example, comparing two values?
5,2 + 0i != 5,1961524 + i2.828107*10^-15
Cheers
I think you've already explained it to yourself - limited precision means limited precision. End of story.
If you want to clean up the results, you can do some rounding of your own to a more reasonable number of significant digits - then your zeros will show up where you want them.
To answer the question raised by your comment, don't try to compare floating point numbers directly - use a range:
if (Math.Abs(float1 - float2) < 0.001) {
    // they're the same!
}
The comp.lang.c FAQ has a lot of questions & answers about floating point, which you might be interested in reading.
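If a fixed absolute tolerance like 0.001 is too coarse, a relative-tolerance variant is a common alternative (a sketch; the name NearlyEqual and the default tolerance are my own choices):
static bool NearlyEqual(double a, double b, double relTol = 1e-9)
{
    // Scale the tolerance by the magnitude of the operands so large and small values are treated consistently.
    return Math.Abs(a - b) <= relTol * Math.Max(Math.Abs(a), Math.Abs(b));
}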
From http://support.microsoft.com/kb/125056
Emphasis mine.
There are many situations in which precision, rounding, and accuracy in floating-point calculations can work to generate results that are surprising to the programmer. There are four general rules that should be followed:
In a calculation involving both single and double precision, the result will not usually be any more accurate than single precision. If double precision is required, be certain all terms in the calculation, including constants, are specified in double precision.
Never assume that a simple numeric value is accurately represented in the computer. Most floating-point values can't be precisely represented as a finite binary value. For example .1 is .0001100110011... in binary (it repeats forever), so it can't be represented with complete accuracy on a computer using binary arithmetic, which includes all PCs.
Never assume that the result is accurate to the last decimal place. There are always small differences between the "true" answer and what can be calculated with the finite precision of any floating point processing unit.
Never compare two floating-point values to see if they are equal or not equal. This is a corollary to rule 3. There are almost always going to be small differences between numbers that "should" be equal. Instead, always check to see if the numbers are nearly equal. In other words, check to see if the difference between them is very small or insignificant.
Note that although I referenced a microsoft document, this is not a windows problem. It's a problem with using binary and is in the CPU itself.
And, as a second side note, I tend to use the Decimal datatype instead of double: See this related SO question: decimal vs double! - Which one should I use and when?
In C# you'll want to use the 'decimal' type, not double for accuracy with decimal points.
As to the 'why'... representing fractions in different base systems gives different answers. For example, 1/3 in a base-10 system is 0.33333 recurring, but in a base-3 system it is 0.1.
A double is a binary value, in base 2. When converting to base-10 decimal you can expect rounding errors like these.
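A quick illustration of this (sketch), using the classic 0.1 + 0.2 example:
Console.WriteLine((0.1 + 0.2).ToString("R"));  // 0.30000000000000004 - the closest base-2 double
Console.WriteLine((0.1m + 0.2m).ToString());   // 0.3 - decimal stores base-10 digits directly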