C#: Decimal -> Largest integer that can be stored exactly

I've been looking at the decimal type for some possible programming fun in the near future, and would like to use it as a bigger integer than an Int64. One key point is that I need to find out the largest integer that I can store safely in a decimal (without losing precision); I say this because apparently it uses some floating point internally, and as programmers, we all know the love that only floating point can give us.
So, does anyone know the largest int I can shove in there?
And on a separate note, does anyone have any experience playing with arrays of longs / ints? It's for the project following this one.
Thanks,
-R

decimal works differently from float and double - it always has enough information to represent integers exactly, as the maximum exponent is 28 and it has 28-29 digits of precision. You may be interested in my articles on decimal floating point and binary floating point in .NET to look at the differences more closely.
So you can store any integer in the range [decimal.MinValue, decimal.MaxValue] without losing any precision.
If you want a wider range than that, you should use BigInteger as Fredrik mentioned (assuming you're on .NET 4, of course... I believe there are 3rd party versions available for earlier versions of .NET).
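As a quick sketch of what that means in practice (any integer of up to 28-29 digits round-trips exactly):
decimal big = 79228162514264337593543950335m;    // decimal.MaxValue, a 29-digit integer
Console.WriteLine(big == decimal.MaxValue);       // True
Console.WriteLine((decimal)long.MaxValue + 1m);   // 9223372036854775808 - past Int64.MaxValue, still exact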

If you want to deal with big integers, you might want to look into the BigInteger type.
From the documentation:
Represents an arbitrarily large signed integer
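A minimal usage sketch, assuming .NET 4 or later with a reference to System.Numerics:
using System.Numerics;

BigInteger big = BigInteger.Pow(2, 200);   // a 61-digit integer, far beyond Int64 (and beyond decimal's 28-29 digits)
Console.WriteLine(big);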

Related

Value of a double variable not exact after multiplying with 100 [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 7 years ago.
If I execute the following expression in C#:
double i = 10*0.69;
i is: 6.8999999999999995. Why?
I understand numbers such as 1/3 can be hard to represent in binary as it has infinite recurring decimal places but this is not the case for 0.69. And 0.69 can easily be represented in binary, one binary number for 69 and another to denote the position of the decimal place.
How do I work around this? Use the decimal type?
Because you've misunderstood floating point arithmetic and how data is stored.
In fact, your code isn't actually performing any arithmetic at execution time in this particular case - the compiler will have done it, then saved a constant in the generated executable. However, it can't store an exact value of 6.9, because that value cannot be precisely represented in floating-point format, just like 1/3 can't be precisely stored in a finite decimal representation.
See if this article helps you.
why doesn't the framework work around this and hide this problem from me and give me the right answer, 0.69!!!
Stop behaving like a Dilbert manager, and accept that computers, though cool and awesome, have limits. In your specific case, it doesn't just "hide" the problem, because you have specifically told it not to. The language (the computer) provides alternatives to the format that you didn't choose. You chose double, which has certain advantages over decimal, and certain downsides. Now, knowing the answer, you're upset that the downsides don't magically disappear.
As a programmer, you are responsible for hiding this downside from managers, and there are many ways to do that. However, the makers of C# have a responsibility to make floating point work correctly, and correct floating point will occasionally result in incorrect math.
So will every other number storage method, as we do not have infinite bits. Our job as programmers is to work with limited resources to make cool things happen. They got you 90% of the way there, just get the torch home.
And 0.69 can easily be represented in binary, one binary number for 69 and another to denote the position of the decimal place.
I think this is a common mistake - you're thinking of floating-point numbers as if they are base 10 (i.e. decimal - hence my emphasis).
So you're thinking that there are two whole-number parts to this double: 69, and a divide-by-100 to move the decimal place - which could also be expressed as:
69 x 10 to the power of -2.
However, floating-point types store the 'position of the point' in base 2.
Your double actually gets stored as:
68999999999999995 x 2 to the power of some big negative number
This isn't as much of a problem once you're used to it - most people know and expect that 1/3 can't be expressed accurately as a decimal or percentage. It's just that the fractions that can't be expressed in base-2 are different.
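If it helps, here's a small sketch that peeks at what is actually stored for 0.69 (exact output formatting may differ slightly between runtimes):
long bits = BitConverter.DoubleToInt64Bits(0.69);
Console.WriteLine(Convert.ToString(bits, 2));   // the raw sign/exponent/mantissa bits, in base 2
Console.WriteLine(0.69.ToString("G17"));        // 0.68999999999999995 - the nearest representable double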
but why doesn't the framework work around this and hide this problem from me and give me the right answer,0.69!!!
Because you told it to use binary floating point, and the solution is to use decimal floating point, so you are suggesting that the framework should disregard the type you specified and use decimal instead, which is very much slower because it is not directly implemented in hardware.
A more efficient solution is to not output the full value of the representation and explicitly specify the accuracy required by your output. If you format the output to two decimal places, you will see the result you expect. However if this is a financial application decimal is precisely what you should use - you've seen Superman III (and Office Space) haven't you ;)
Note that it is all a finite approximation of an infinite range, it is merely that decimal and double use a different set of approximations. The advantage of decimal is it produces the same approximations that you would if you were performing the calculation yourself. For example if you calculated 1/3, you would eventually stop writing 3's when it was 'good enough'.
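A small sketch of the two suggestions above - format the output, or use decimal for money:
double d = 10 * 0.69;
Console.WriteLine(d.ToString("F2"));   // 6.90 - format to two decimal places for display
decimal m = 10 * 0.69m;                // decimal arithmetic gives exactly 6.90
Console.WriteLine(m);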
For the same reason that 1/3 in a decimal system comes out as 0.3333333333333333333333333333333333333333333 and not the exact fraction, which is infinitely long.
To work around it (e.g. to display on screen) try this:
double i = (double) Decimal.Multiply(10, (Decimal) 0.69);
Everyone seems to have answered your first question, but ignored the second part.

Why is the division result between two integers truncated?

All experienced programmers in C# (I think this comes from C) are used to casting one of the integers in a division to get the decimal / double / float result instead of the int (the real result truncated).
I'd like to know why is this implemented like this? Is there ANY good reason to truncate the result if both numbers are integer?
C# traces its heritage to C, so the answer to "why is it like this in C#?" is a combination of "why is it like this in C?" and "was there no good reason to change?"
The approach of C is to have a fairly close correspondence between the high-level language and low-level operations. Processors generally implement integer division as returning a quotient and a remainder, both of which are of the same type as the operands.
(So my question would be, "why doesn't integer division in C-like languages return two integers", not "why doesn't it return a floating point value?")
The solution was to provide separate operations for division and remainder, each of which returns an integer. In the context of C, it's not surprising that the result of each of these operations is an integer. This is frequently more accurate than floating-point arithmetic. Consider the example from your comment of 7 / 3. This value cannot be represented by a finite binary number nor by a finite decimal number. In other words, on today's computers, we cannot accurately represent 7 / 3 unless we use integers! The most accurate representation of this fraction is "quotient 2, remainder 1".
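For example (Math.DivRem is the framework helper that returns both parts; the out-variable syntax assumes C# 7 or later):
int quotient  = 7 / 3;                  // 2
int remainder = 7 % 3;                  // 1
int q = Math.DivRem(7, 3, out int r);   // both at once: q = 2, r = 1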
So, was there no good reason to change? I can't think of any, and I can think of a few good reasons not to change. None of the other answers has mentioned Visual Basic which (at least through version 6) has two operators for dividing integers: / converts the integers to double, and returns a double, while \ performs normal integer arithmetic.
I learned about the \ operator after struggling to implement a binary search algorithm using floating-point division. It was really painful, and integer division came in like a breath of fresh air. Without it, there was lots of special handling to cover edge cases and off-by-one errors in the first draft of the procedure.
From that experience, I draw the conclusion that having different operators for dividing integers is confusing.
Another alternative would be to have only one division operation, which always returns a double, and require programmers to truncate it. This means you have to perform two int->double conversions, a truncation and a double->int conversion every time you want integer division. And how many programmers would mistakenly round or floor the result instead of truncating it? It's a more complicated system, at least as prone to programmer error, and slower.
Finally, in addition to binary search, there are many standard algorithms that employ integer arithmetic. One example is dividing collections of objects into sub-collections of similar size. Another is converting between indices in a 1-d array and coordinates in a 2-d matrix.
As far as I can see, no alternative to "int / int yields int" survives a cost-benefit analysis in terms of language usability, so there's no reason to change the behavior inherited from C.
In conclusion:
Integer division is frequently useful in many standard algorithms.
When the floating-point division of integers is needed, it may be invoked explicitly with a simple, short, and clear cast: (double)a / b rather than a / b
Other alternatives introduce more complication for the programmer and more clock cycles for the processor.
Is there ANY good reason to truncate the result if both numbers are integer?
Of course; I can think of a dozen such scenarios easily. For example: you have a large image, and a thumbnail version of the image which is 10 times smaller in both dimensions. When the user clicks on a point in the large image, you wish to identify the corresponding pixel in the scaled-down image. Clearly to do so, you divide both the x and y coordinates by 10. Why would you want to get a result in decimal? The corresponding coordinates are going to be integer coordinates in the thumbnail bitmap.
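A rough sketch of that scenario (clickX and clickY are made-up names for the click coordinates):
int clickX = 1234, clickY = 567;
int thumbX = clickX / 10;   // 123 - thumbnail pixel coordinates must be whole numbers
int thumbY = clickY / 10;   // 56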
Doubles are great for physics calculations and decimals are great for financial calculations, but almost all the work I do with computers that does any math at all does it entirely in integers. I don't want to be constantly having to convert doubles or decimals back to integers just because I did some division. If you are solving physics or financial problems then why are you using integers in the first place? Use nothing but doubles or decimals. Use integers to solve finite mathematics problems.
Calculating on integers is faster (usually) than on floating point values. Besides, all other integer/integer operations (+, -, *) return an integer.
EDIT:
As per the request of the OP, here's some addition:
The OP's problem is that they think of / as division in the mathematical sense, while the / operator in the language performs some other operation (which is not the mathematical division). By this logic they should question the validity of all other operations (+, -, *) as well, since those have special overflow rules, which is not the same as would be expected from their mathematical counterparts. If this is bothersome for someone, they should find another language where the operations behave as they expect.
As for the claim on the performance difference in favor of integer values: when I wrote the answer I only had "folk" knowledge and "intuition" to back up the claim (hence my "usually" disclaimer). Indeed, as Gabe pointed out, there are platforms where this does not hold. On the other hand, I found this link (point 12) that shows mixed performance on an Intel platform (the language used is Java, though).
The takeaway should be that with performance many claims and intuition are unsubstantiated until measured and found true.
Yes, if the end result needs to be a whole number. It would depend on the requirements.
If these are indeed your requirements, then you would not want to store a decimal and then truncate it. You would be wasting memory and processing time to accomplish something that is already built-in functionality.
The operator is designed to return the same type as its input.
Edit (comment response):
Why? I don't design languages, but I would assume that most of the time you will be sticking with the data types you started with, and in the remaining instances, what criteria would you use to decide automatically which type the user wants? Would you automatically expect a string when you need it? (sincerity intended)
If you add an int to an int, you expect to get an int. If you subtract an int from an int, you expect to get an int. If you multiple an int by an int, you expect to get an int. So why would you not expect an int result if you divide an int by an int? And if you expect an int, then you will have to truncate.
If you don't want that, then you need to cast your ints to something else first.
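For instance:
int sum = 7, count = 2;
int truncated = sum / count;          // 3   - int / int stays int
double exact = (double)sum / count;   // 3.5 - cast one operand first if you want a double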
Edit: I'd also note that if you really want to understand why this is, then you should start looking into how binary math works and how it is implemented in an electronic circuit. It's certainly not necessary to understand it in detail, but having a quick overview of it would really help you understand how the low-level details of the hardware filter through to the details of high-level languages.

Is there a Math API for Pow(decimal, decimal)

Is there a library for decimal calculation, especially the Pow(decimal, decimal) method? I can't find any.
It can be free or commercial, either way, as long as there is one.
Note: I can't do it myself, can't use for loops, can't use Math.Pow, Math.Exp or Math.Log, because they all take doubles, and I can't use doubles. I can't use a series because it would only be as precise as doubles.
One of the multipliers is a rate: 1/rate^(days/365).
The reason there is no decimal power function is because it would be pointless to use decimal for that calculation. Use double.
Remember, the point of decimal is to ensure that you get exact arithmetic on values that can be exactly represented as short decimal numbers. For reasonable values of rate and days, the values of any of the other subexpressions are clearly not going to be exactly represented as short decimal values. You're going to be dealing with inexact values, so use a type designed for fast calculations of slightly inexact values, like double.
The results when computed in doubles are going to be off by a few billionths of a penny one way or the other. Who cares? You'll round out the error later. Do the rate calculation in doubles. Once you have a result that needs to be turned back into a currency again, multiply the result by ten thousand, round it off to the nearest integer, convert that to a decimal, and then divide it out by ten thousand again, and you'll have a result accurate to four decimal places, which ought to be plenty for a financial calculation.
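A rough sketch of that recipe (rate, days and principal are made-up inputs, not from the question):
double rate = 1.05, days = 90, principal = 1000;
double raw = principal / Math.Pow(rate, days / 365.0);      // fast, slightly inexact
decimal money = (decimal)Math.Round(raw * 10000) / 10000m;  // round back to 4 decimal places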
Here is what I used.
output = (decimal)Math.Pow((double)var1, (double)var2);
Now, I'm just learning, but this did work, though I don't know if I can explain it correctly.
What I believe this does is take the inputs var1 and var2 and cast them to doubles to use as the arguments for the Math.Pow method. The (decimal) in front of Math.Pow then converts the result back to a decimal and places it in the output variable.
I hope someone can correct me if my explanation is wrong, but all I know is that it worked for me.
I know this is an old thread but I'm putting this here in case someone finds it when searching for a solution.
If you don't want to mess around with casting and doing your own custom implementation, you can install the NuGet package DecimalMath.DecimalEx and use it like DecimalEx.Pow(number, power).
Well, here is the Wikipedia page that lists current C# numerics libraries. But TBH I don't think there is a lot of support for decimals
http://en.wikipedia.org/wiki/List_of_numerical_libraries
It's kind of inappropriate to use decimal for this kind of calculation in general. It's high precision, yes - but it's also low range. As the MSDN docs state, it's for financial/monetary calculations - where there isn't much call for Pow, unfortunately!
Of course you might have a specific problem domain that needs super high precision and all numbers within 10^28 to 10^-28. But in that case you will probably just need to write your own series calculator such as the one linked to in the comments to the question.
Not using decimal. Use double instead. According to this thread, the Math.Pow(double, double) is called directly from CLR.
How is Math.Pow() implemented in .NET Framework?
Here is what .NET Framework 4 has (2 lines only)
[SecuritySafeCritical]
public static extern double Pow(double x, double y);
decimal (a 128-bit type) is not implemented natively by the CLR or the hardware; it is done in software. Maybe that will change in a future Framework?
Wait, huh? Why can't you use doubles? You could always cast if you're using ints or something:
int a = 1;
int b = 2;
int result = (int)Math.Pow(a,b);

Why is System.Math and for example MathNet.Numerics based on double?

All the methods in System.Math take double as parameters and return double values. The constants are also of type double. I checked out MathNet.Numerics, and the same seems to be the case there.
Why is this? Especially for constants. Isn't decimal supposed to be more exact? Wouldn't that often be kind of useful when doing calculations?
This is a classic speed-versus-accuracy trade off.
However, keep in mind that for PI, for example, the most digits you will ever need is 41.
The largest number of digits of pi that you will ever need is 41. To compute the circumference of the universe with an error less than the diameter of a proton, you need 41 digits of pi. It seems safe to conclude that 41 digits is sufficient accuracy in pi for any circle measurement problem you're likely to encounter. Thus, in the over one trillion digits of pi computed in 2002, all digits beyond the 41st have no practical value.
In addition, decimal and double have a slightly different internal storage structure. Decimals are designed to store base-10 data, whereas doubles (and floats) are made to hold binary data. On a binary machine (like every computer in existence) a double will have fewer wasted bits when storing any number within its range.
Also consider:
System.Double: 8 bytes, approximately ±5.0e-324 to ±1.7e+308, with 15 or 16 significant figures
System.Decimal: 16 bytes, approximately ±1.0e-28 to ±7.9e+28, with 28 or 29 significant figures
As you can see, decimal has a smaller range, but a higher precision.
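You can see both effects directly (exact output formatting varies a little between runtimes):
Console.WriteLine(1.0 / 3.0);        // roughly 15-17 significant digits
Console.WriteLine(1.0m / 3.0m);      // 0.3333333333333333333333333333 - 28 digits
Console.WriteLine(sizeof(double));   // 8
Console.WriteLine(sizeof(decimal));  // 16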
No - decimals are no more "exact" than doubles, or for that matter, any type. The concept of "exactness" (when speaking about numerical representations in a computer) is what is wrong. Any type is absolutely 100% exact at representing some numbers. Unsigned bytes are 100% exact at representing the whole numbers from 0 to 255, but they're no good for fractions, for negatives, or for integers outside the range.
Decimals are 100% exact at representing a certain set of base 10 values. doubles (since they store their value using binary IEEE exponential representation) are exact at representing a set of binary numbers.
Neither is any more exact than the other in general; they are simply for different purposes.
To elaborate a bit further, since I seem to not be clear enough for some readers...
If you take every number which is representable as a decimal, and mark every one of them on a number line, between every adjacent pair of them there is an additional infinity of real numbers which are not representable as a decimal. The exact same statement can be made about the numbers which can be represented as a double. If you marked every decimal on the number line in blue, and every double in red, except for the integers, there would be very few places where the same value was marked in both colors.
In general, for 99.99999 % of the marks, (please don't nitpick my percentage) the blue set (decimals) is a completely different set of numbers from the red set (the doubles).
This is because the very definition of the blue set is that it is a base-10 mantissa/exponent representation, and a double is a base-2 mantissa/exponent representation. Any value represented as a base-2 mantissa and exponent (1.00110101001 x 2 ^ -11101001101001) means take the mantissa value (1.00110101001) and multiply it by 2 raised to the power of the exponent (when the exponent is negative, this is equivalent to dividing by 2 to the power of the absolute value of the exponent). This means that where the exponent is negative (or where any portion of the mantissa is a fractional binary), the number cannot be represented as a decimal mantissa and exponent, and vice versa.
For any arbitrary real number, that falls randomly on the real number line, it will either be closer to one of the blue decimals, or to one of the red doubles.
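A two-line illustration of the "different sets of marks" point:
Console.WriteLine(0.1m + 0.2m == 0.3m);   // True  - 0.1, 0.2 and 0.3 are exactly representable in base 10
Console.WriteLine(0.1 + 0.2 == 0.3);      // False - none of them falls exactly on a double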
Decimal is more precise but has less of a range. You would generally use Double for physics and mathematical calculations but you would use Decimal for financial and monetary calculations.
See the following articles on msdn for details.
Double
http://msdn.microsoft.com/en-us/library/678hzkk9.aspx
Decimal
http://msdn.microsoft.com/en-us/library/364x0z75.aspx
Seems like most of the arguments here against "it does not do what I want" are "but it's faster" - well, so is ANSI C plus the GMP library, but nobody is advocating that, right?
If you particularly want to control accuracy, then there are other languages which have taken the time to implement exact precision, in a user controllable way:
http://www.doughellmann.com/PyMOTW/decimal/
If precision is really important to you, then you are probably better off using languages that mathematicians would use. If you do not like Fortran then Python is a modern alternative.
Whatever language you are working in, remember the golden rule:
Avoid mixing types...
So do convert a and b to the same type before you attempt a operator b.
If I were to hazard a guess, I'd say those functions leverage low-level math functionality (perhaps in C) that does not use decimals internally, and so returning a decimal would require a cast from double to decimal anyway. Besides, the purpose of the decimal value type is to ensure accuracy; these functions do not and cannot return 100% accurate results without infinite precision (e.g., irrational numbers).
Neither Decimal nor float or double are good enough if you require something to be precise. Furthermore, Decimal is so expensive and overused out there it is becoming a regular joke.
If you work in fractions and require ultimate precision, use fractions. It's same old rule, convert once and only when necessary. Your rounding rules too will vary per app, domain and so on, but sure you can find an odd example or two where it is suitable. But again, if you want fractions and ultimate precision, the answer is not to use anything but fractions. Consider you might want a feature of arbitrary precision as well.
The actual problem with the CLR in general is that it is so odd, and plain broken, to implement a library that deals with numerics in a generic fashion, largely due to bad primitive design and the shortcomings of the most popular compiler for the platform. It's almost the same as the Java fiasco.
double just turns out to be the best compromise covering most domains, and it works well, despite the fact that the MS JIT is still incapable of utilising CPU technology that is about 15 years old now.
[piece to users of MSDN slowdown compilers]
Double is a built-in type. It is supported by the FPU/SSE core (formerly known as the "math coprocessor"), which is why it is blazingly fast, especially at multiplication and scientific functions.
Decimal is actually a complex structure, consisting of several integers.
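You can see that structure with decimal.GetBits, which returns the four underlying integers:
int[] parts = decimal.GetBits(1.23m);
// parts[0..2] hold the 96-bit integer 123; parts[3] packs the sign and the scale (here 2)
Console.WriteLine(string.Join(", ", parts));   // 123, 0, 0, 131072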

How to fix precision of variable

In C#:
double tmp = 3.0 * 0.05;   // tmp is now 0.15000000000000002
This has to do with money. The value is really $0.15, but the system wants to round it up to $0.16. 0.151 should probably be rounded up to 0.16, but not 0.15000000000000002
What are some ways I can get the correct numbers (i.e. 0.15, or 0.16 if the decimal is high enough)?
Use a fixed-point variable type, or a base ten floating point type like Decimal. Floating point numbers are always somewhat inaccurate, and binary floating point representations add another layer of inaccuracy when they convert to/from base two.
Money should be stored as decimal, which is a floating decimal point type. The same goes for other data which really is discrete rather than continuous, and which is logically decimal in nature.
Humans have a bias to decimal for obvious reasons, so "artificial" quantities such as money tend to be more appropriate in decimal form. "Natural" quantities (mass, height) are on a more continuous scale, which means that float/double (which are floating binary point types) are often (but not always) more appropriate.
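For the question's example, keeping everything in decimal gives an exact value to round:
decimal tmp = 3.0m * 0.05m;
Console.WriteLine(tmp);                  // 0.150 - no binary representation error
Console.WriteLine(Math.Round(tmp, 2));   // 0.15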
In Patterns of Enterprise Application Architecture, Martin Fowler recommends using a Money abstraction
http://martinfowler.com/eaaCatalog/money.html
Mostly he does it for dealing with Currency, but also precision.
You can see a little of it in this Google Book search result:
http://books.google.com/books?id=FyWZt5DdvFkC&pg=PT520&lpg=PT520&dq=money+martin+fowler&source=web&ots=eEys-C_vdA&sig=jckdxgMLSRJtGDYZtcbYST1ak8M&hl=en&sa=X&oi=book_result&resnum=6&ct=result
The 'decimal' type was designed especially for this.
A decimal data type would work well and is probably your choice.
However, in the past I've been able to do this in an optimized way using fixed point integers. It's ideal for high performance computations where decimal bogs down and you can't have the small precision errors of float.
Start with, say, an Int32, and split it in half: the first half is the whole-number portion, the second half is the fractional portion. You get 16 bits of signed integer plus 16 bits of fractional precision. For example, 1.5 as a 16:16 fixed point would be represented as 0x00018000. Or, alter the distribution of bits to suit your needs.
Fixed point numbers can generally be added/sub/mul/div like any other integer, with a little bit of work around mul/div to avoid overflows.
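A minimal 16:16 sketch along those lines (ToFixed, ToDouble and Mul are hypothetical helpers, not library calls):
const int Shift = 16;
int ToFixed(double x)  => (int)(x * (1 << Shift));
double ToDouble(int f) => f / (double)(1 << Shift);
int Mul(int a, int b)  => (int)(((long)a * b) >> Shift);   // widen to long so the product doesn't overflow

int onePointFive = ToFixed(1.5);                                // 0x00018000
Console.WriteLine(ToDouble(Mul(onePointFive, ToFixed(2.0))));   // 3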
What you are facing is a rounding problem, which I mentioned earlier in another post:
Can I use "System.Currency" in .NET?
And refer to this as well: Rounding
