I have a problem I need advice on: I have to do calculations with big numbers, signed (plus/minus), with an integer part up to 70*(10^27) and a decimal part accurate to 9*(10^-31). Most of the time I only do simple operations (add/subtract/multiply/divide) where I could ignore most digits of the decimal part (use only 8 decimals); however, in many cases I would have to take the 'whole' decimal and do calculations with that precision (and store the result, which is used in subsequent calculations).
An example of a number:
66898832014839425790021345548 . 8499970865478385639546957014538
I saw the articles on decimal vs long etc. Should I use a decimal, or should a custom type be made? If the latter, how may I do simple arithmetic operations? (Rounding of the last decimal only is acceptable.)
My projects are all in C# and SQL Server; thank you very much in advance.
There is no standard implementation in C#, but you can create your own library based on BigInteger, with a constructor like
public BigDecimal(BigInteger integer, BigInteger scale)
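A minimal sketch of such a type, assuming an int scale (the count of digits after the decimal point) rather than a BigInteger one for simplicity; the names here are illustrative, not an existing API:

```csharp
using System;
using System.Numerics;

// Sketch of a fixed-point "BigDecimal" built on BigInteger.
// Represented value = Unscaled / 10^Scale.
public readonly struct BigDecimal
{
    public BigInteger Unscaled { get; }
    public int Scale { get; }   // digits after the decimal point

    public BigDecimal(BigInteger unscaled, int scale)
    {
        Unscaled = unscaled;
        Scale = scale;
    }

    static BigInteger Pow10(int n) => BigInteger.Pow(10, n);

    // Addition: align both operands to the larger scale, then add.
    public static BigDecimal operator +(BigDecimal a, BigDecimal b)
    {
        int scale = Math.Max(a.Scale, b.Scale);
        BigInteger ua = a.Unscaled * Pow10(scale - a.Scale);
        BigInteger ub = b.Unscaled * Pow10(scale - b.Scale);
        return new BigDecimal(ua + ub, scale);
    }

    // Multiplication: multiply unscaled values, add the scales.
    public static BigDecimal operator *(BigDecimal a, BigDecimal b)
        => new BigDecimal(a.Unscaled * b.Unscaled, a.Scale + b.Scale);

    public override string ToString()
    {
        BigInteger abs = BigInteger.Abs(Unscaled);
        string digits = abs.ToString().PadLeft(Scale + 1, '0');
        string sign = Unscaled.Sign < 0 ? "-" : "";
        return Scale == 0
            ? sign + digits
            : sign + digits.Insert(digits.Length - Scale, ".");
    }
}
```

Division is the tricky part (you must pick a result scale and round), which is exactly where the "rounding of the last decimal" requirement comes in.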
You can also reference 3rd-party libraries like GMP through its .NET wrappers/ports, such as Math.Gmp.Native.NET (a wrapper around the native libgmp), etc.
There are some custom libs, as Franz Gleichmann already mentioned in his comment: BigDecimal, AngouriMath
For SQL Server, most libraries use strings to store this kind of data. For instance, Java's BigDecimal is mapped to a string via JDBC.
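To make the string approach concrete: SQL Server's native DECIMAL type tops out at 38 significant digits, so a 60-digit value like the example above won't fit and is commonly stored in a string column (e.g. VARCHAR(80)), then parsed back on read. A rough sketch of the read side (column size and parsing scheme are illustrative):

```csharp
using System;
using System.Numerics;

// Value as it would come back from a VARCHAR column.
string stored = "66898832014839425790021345548.8499970865478385639546957014538";

// Split into integer and fractional digits and parse each with BigInteger.
string[] parts = stored.Split('.');
BigInteger integerPart  = BigInteger.Parse(parts[0]);
BigInteger fractionPart = BigInteger.Parse(parts[1]);
int scale = parts[1].Length;   // 31 digits after the decimal point

Console.WriteLine($"{integerPart} + {fractionPart} * 10^-{scale}");
```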
Related
I need to be able to use the standard math functions on decimal numbers. Accuracy is very important. double is not an acceptable substitution. How can math operations be implemented with decimal numbers in C#?
edit
I am using the System.Decimal. My issue is that System.Math does not work with System.Decimal. For example, the following functions do not work with System.Decimal:
System.Math.Pow
System.Math.Log
System.Math.Sqrt
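One common workaround for the missing Sqrt is to seed with the double result and refine in decimal using Newton's method. A sketch, assuming rounding of the last digit is acceptable (the name DecimalSqrt is illustrative):

```csharp
using System;

static class DecimalMathSketch
{
    // Square root for decimal via Newton's method, since
    // System.Math.Sqrt only accepts double.
    public static decimal DecimalSqrt(decimal x)
    {
        if (x < 0) throw new ArgumentOutOfRangeException(nameof(x));
        if (x == 0) return 0;

        // Seed with the double answer, then refine at decimal precision.
        decimal guess = (decimal)Math.Sqrt((double)x);
        for (int i = 0; i < 10; i++)   // Newton converges quadratically
        {
            decimal next = (guess + x / guess) / 2;
            if (next == guess) break;  // converged to decimal precision
            guess = next;
        }
        return guess;
    }
}
```

Pow and Log are harder because general exponents require exp/ln series; libraries like the DecimalMath package mentioned in the answers below take that route.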
Well, Double uses floating point math which isn't what you're after unless you're doing trigonometry for 3D graphics or something.
If you need to do simple math operations like division, you should use System.Decimal.
From MSDN: The decimal keyword denotes a 128-bit data type. Compared to floating-point types, the decimal type has a greater precision and a smaller range, which makes it suitable for financial and monetary calculations.
Update: After some discussion, the problem is that you want to work with Decimals, but System.Math only takes Doubles for several key pieces of functionality. Sadly, you are working with high precision numbers, and since Decimal is 128 bit and Double is only 64, the conversion results in a loss of precision.
Apparently there are some possible plans to make most of System.Math handle Decimal, but we aren't there yet.
I googled around a bit for math libraries and compiled this list:
Math.NET, an open-source (MIT/X11, LGPL & GPL) mathematical library written in C#/.NET, aiming to provide a self-contained, clean framework for symbolic algebraic and numerical/scientific computations.
Extreme Optimization Mathematics Library for .NET (paid)
DecimalMath A relative newcomer, this one advertises itself as: Portable math support for Decimal that Microsoft forgot and more. Sounds promising.
DecimalMath contains decimal analogues of all the functions in the System.Math class.
Note: it is my library, and it also contains some examples.
You haven't given us nearly enough information to answer the question.
decimal and double are both inaccurate. The representation error of decimal is zero when the quantity being represented is exactly equal to a fraction of the form x/10^n, for suitable choices of x and n. The representation error of double is zero when the quantity is exactly equal to a fraction of the form x/2^n, again for suitable choices of x and n.
If the quantities you are dealing with are not fractions of that form then you will get some representation error, period. In particular, you mention taking square roots. Many square roots are irrational numbers; they have no fractional form, so any representation format that uses fractions is going to give small errors.
Can you explain what you are doing in hugely more detail?
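A quick demonstration of the two error models described above: 1/10 is a fraction of the form x/10^n, so decimal holds it exactly, while double (which can only represent x/2^n exactly) must round it:

```csharp
using System;

double d = 0.1;      // nearest binary64 value, not exactly 1/10
decimal m = 0.1m;    // exactly 1/10

Console.WriteLine(d + d + d == 0.3);    // False: accumulated binary rounding error
Console.WriteLine(m + m + m == 0.3m);   // True: 3/10 is a decimal fraction
```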
For representing money, I know it's best to use C#'s decimal data type rather than double or float. However, if you are working with amounts of money less than millions, wouldn't int be a better option?
Does decimal have perfect calculation precision, i.e. does it not suffer from "Why computers are bad at numbers"? If there is still some possibility of a calculation mistake, wouldn't it be better to use int and just display the value with a decimal separator?
The amounts you are working with, "less than millions" in your example, aren't the issue. It's what you want to do with the values and how much precision you need for that. And treating those numbers as integers doesn't really help; that just puts the onus on you to keep track of the precision. If precision is critical, then there are libraries to help: BigInteger and BigDecimal packages are available in a variety of languages, and there are libraries that place no limit on the precision (Wikipedia has a list). The important takeaway is to know your problem space and how much precision you need. For many things the built-in precision is fine; when it's not, you have other options.
Like li223 said, an integer won't allow you to save values with decimal places, and the majority of currencies allow decimal values.
I would advise picking a fixed number of decimal places to work with, which avoids the problem you referred to ("Why computers are bad at numbers"). I work with invoicing and we use 8 decimal places, and it works fine with all currencies so far.
The main reason to use decimal over integer in this case is that decimal, well, allows decimal places, i.e. £2.50 can be properly represented as 2.5 with a decimal, whereas with an integer you can't represent decimal points. This is fine if, like John mentions in their comment, you're representing a currency like Japanese Yen that doesn't have decimals.
As for decimal's accuracy, it still suffers from "Why are computers bad at numbers" see the answer to this question for more info.
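The trade-off in practice: integer minor units (e.g. pence) are exact for addition and subtraction, but any division forces you to manage the lost remainder yourself, whereas decimal lets you state the rounding rule explicitly. A small illustration with made-up amounts:

```csharp
using System;

// £10.00 split three ways, stored as integer pence.
int pence = 1000;
int perPersonPence = pence / 3;                // 333 — 1p vanishes silently

// The same split in decimal: the rounding step is explicit and visible.
decimal pounds = 10.00m;
decimal perPerson = Math.Round(pounds / 3, 2); // 3.33

Console.WriteLine($"{perPersonPence}p vs £{perPerson}");
```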
Just a simple question about JS/C# floats. I am making a multiplayer game and normally have to synchronize things between client and server. Now the question: are C# floats and JavaScript floats the same data type? That is, can I send one to the other and have them understand each other? I will send floats in scientific notation because I think that way it will be the shortest and most precise. Unless you have other ideas :)
Thanks in advance.
A C# double and a JavaScript Number are the same thing, both are double-precision (64-bit) IEEE-754 binary floating point numbers ("binary64"). (A C# float is just single-precision [32-bit, "binary32"], so if you want the same thing JavaScript has, use double, not float.)
Side note: Although they're the same number type, their respective "to string" operations are slightly different. For instance, given the number 0.87090686143883822 (which is really 0.8709068614388382201241256552748382091522216796875, the nearest value IEEE-754 binary64 can hold), the "to string" operation from C#, JavaScript, and Java (which also uses binary64 for its double) are:
0.870906861438838 - C#'s ToString()
0.87090686143883822 - C#'s ToString("R")
0.8709068614388382 - JavaScript's toString()
0.8709068614388382 - Java's String.valueOf(double)
I don't know the rules for C#, but JavaScript and Java both default to including only as many digits as are required to distinguish the number from its nearest representable neighbor. C#'s ToString() doesn't do that (it produces 0.870906861438838, losing the remaining 0.0000000000000002201241256552748382091522216796875). C#'s ToString("R") includes an unnecessary additional digit.
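One practical consequence: if you serialize a double as a string for the wire, use the "R" (round-trip) format so that parsing recovers the identical bits. (Note that on .NET Core 3.0 and later, plain ToString() also produces a shortest round-trippable string, so the truncation mentioned above applies to .NET Framework.)

```csharp
using System;
using System.Globalization;

double x = 0.87090686143883822;

// "R" guarantees Parse(ToString("R")) == original double.
// InvariantCulture avoids locale-dependent decimal separators on the wire.
string wire = x.ToString("R", CultureInfo.InvariantCulture);
double back = double.Parse(wire, CultureInfo.InvariantCulture);

Console.WriteLine(back == x);   // True
```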
I will send floats with scientific notations
Why not send data using JSON? It works rather well and decouples you from having to invent a new transport format.
I'm using System.Numerics.BigInteger in .NET 4.0 and the BigRational class from the BCL to build a math parser/calculator application. The goal is to write a fully functional math parser with support for big numbers ... So I need to use math functions. But unfortunately all System.Math functions return typical data types like float and double, so they are not very accurate. I need more precision. I dug into mscorlib.dll, but for the sine function I just found this:
[SecuritySafeCritical, __DynamicallyInvokable]
[MethodImpl(MethodImplOptions.InternalCall)]
public static extern double Sin(double a);
I know that many math function are not implemented in .Net and come directly from hardware codes. So can I use those function and get high precision or big integers? If not, what's the best way to implement them myself? (performance is also important) Any resources or pointing to the right direction would be appreciated!
I wrote a high precision float class (HPF) in MATLAB. And, yes, it IS practical to do the computations asked about here in thousands of digits, at least within limits. Don't expect the result to be lightning fast for numbers that massive.
Here, in well under a second of CPU time, HPF computes sin(0.5) to 2000 decimal digits.
x = hpf('0.5',2000);
z = sin(x)
z =
0.479425538604203000273287935215571388081803367940600675188616613125535000287814832209631274684348269086132091084505717417811093748609940282780153962046191924609957293932281400533546338188055228595670135699854233639121071720777380152979871377169515176180721149698073701474768697031987039000973395491029894434177331111096739039361241636534804019183463143762843926452601570712830927660067910175336311622876167957348403718668177303331798720340645673471829945068246636124554634532782893612447795366017354628204647178237768988816445128261978402917354661506836897331472873974887881902079287991384230955038175847050300676464282671362033525145398753090142048470177292728892123014178669712800265117176079193873796544208489643033894475668235728767625977146244470008078369282149419911387438105516464710720804628122474223356108683231446335477793373711364374549654790151227285072215821255627613356817811727995213000868915938895520647973449095029793135241377770915073605710265060152488745817262109248928012910554358198961895228039305637921906526847785088549344512739780328597427473867012277271549486543578816378511405143566875251316557923912900653140504677639616053008720974753831914745714669912224538226431260188698343271762912517877794574633709250321346765727522449265642048754941719019763637083221420143793554182996305476734371680130197840691576586983290431584706539714079215670472047421308333079841999449612461413044988441164244718005555663745940782276119662532686687393699775423380907661788184469353378717195459390205890100001849223928034165674331893545145031080476199257274244262802136434885974219903376361999065355496970754122461679771228620095457540936824935178011008832914288410321001184266158360520472987145378248679739337768500580289351976239833993762809717428536700480483446822729949761973759839732586494302228555350251769573235579119069975890142431940566497665891160178119541784614823802696271906328988353065762100578311241201683116091269460428087355849219936531577516196309081575519239194590177920074
14
asin(z)
ans =
0.5
asin(z) - x
ans =
3.e-2004
I wrote HPF essentially from scratch, without recourse to an existing code like the Java BigDecimal class, which I did test out. In fact, I wrote the entire class several times, once as an overlay on java.math.BigDecimal class. I found I did not like their implementation, so I started over and wrote it myself. You can find details in a .pdf file included in the zip.
Having said all that, I spent literally man-months in that effort, learning various tricks to tease many thousands of digits from a series implementation.
So even if you do use a tool like the java BigDecimal class, you will still probably need to find or build tools to compute special functions on those numbers. This was the part that cost most of my time.
Are such computations, done in thousands of digits, a good use of CPU time? Only you know. Personally, it was a great deal of fun to write a tool like that.
You will need to implement them yourself. For trig functions, you will want to read up on Taylor series.
However, I doubt this will be practical for thousands of digits. Do you really need that much precision? Generally, if precision like this is really required, you're probably better off not evaluating functions, especially transcendental functions, and working with them symbolically instead.
At the very least, you'll need arbitrary-precision floating-point numbers. You could use BigIntegers for this (one for the exponent, one for the mantissa). Rational numbers won't be practical.
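As a concrete illustration of the Taylor-series approach, here is sin(x) accumulated in decimal. This is a toy next to true arbitrary precision: decimal gives only about 28 significant digits, but the same recurrence works unchanged over an arbitrary-precision type.

```csharp
using System;

static class TaylorSketch
{
    // sin(x) = x - x^3/3! + x^5/5! - ...
    // Each term is the previous one times -x^2 / ((2n)(2n+1)).
    public static decimal DecimalSin(decimal x)
    {
        decimal term = x;   // first term of the series is x itself
        decimal sum = x;
        for (int n = 1; ; n++)
        {
            term *= -x * x / ((2 * n) * (2 * n + 1));
            if (term == 0) break;   // underflowed past decimal's precision
            sum += term;
        }
        return sum;
    }
}
```

For decent convergence you also need argument reduction (fold x into a small range like [-pi/4, pi/4] first), which is one of the "tricks" the HPF answer above alludes to.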
Is there a library for decimal calculation, especially the Pow(decimal, decimal) method? I can't find any.
It can be free or commercial, either way, as long as there is one.
Note: I can't do it myself, can't use for loops, can't use Math.Pow, Math.Exp or Math.Log, because they all take doubles, and I can't use doubles. I can't use a serie because it would be as precise as doubles.
One of the multipliers is a rate: 1/rate^(days/365).
The reason there is no decimal power function is because it would be pointless to use decimal for that calculation. Use double.
Remember, the point of decimal is to ensure that you get exact arithmetic on values that can be exactly represented as short decimal numbers. For reasonable values of rate and days, the values of any of the other subexpressions are clearly not going to be exactly represented as short decimal values. You're going to be dealing with inexact values, so use a type designed for fast calculations of slightly inexact values, like double.
The results when computed in doubles are going to be off by a few billionths of a penny one way or the other. Who cares? You'll round out the error later. Do the rate calculation in doubles. Once you have a result that needs to be turned back into a currency again, multiply the result by ten thousand, round it off to the nearest integer, convert that to a decimal, and then divide it out by ten thousand again, and you'll have a result accurate to four decimal places, which ought to be plenty for a financial calculation.
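A sketch of the round-trip described above, doing the rate math in double and then snapping the result back to a decimal with four decimal places (the rate and day count are made-up example values):

```csharp
using System;

double rate = 1.05;
double days = 90;

// Do the inexact exponentiation in double.
double factor = 1.0 / Math.Pow(rate, days / 365.0);

// Multiply by ten thousand, round to the nearest integer,
// convert to decimal, divide back out: exact to four decimal places.
long scaled = (long)Math.Round(factor * 10000.0);
decimal result = (decimal)scaled / 10000m;

Console.WriteLine(result);   // ~0.9880 for these inputs
```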
Here is what I used.
output = (decimal)Math.Pow((double)var1, (double)var2);
Now, I'm just learning, but this did work, though I don't know if I can explain it correctly.
What I believe this does is take the inputs var1 and var2 and cast them to doubles to use as the arguments for the Math.Pow method. Then the (decimal) in front of Math.Pow casts the result back to a decimal, and the value is placed in the output variable.
I hope someone can correct me if my explanation is wrong, but all I know is that it worked for me.
I know this is an old thread but I'm putting this here in case someone finds it when searching for a solution.
If you don't want to mess around with casting and doing you own custom implementation you can install the NuGet DecimalMath.DecimalEx and use it like DecimalEx.Pow(number,power).
Well, here is the Wikipedia page that lists current C# numerics libraries. But TBH I don't think there is a lot of support for decimals
http://en.wikipedia.org/wiki/List_of_numerical_libraries
It's kind of inappropriate to use decimal for this kind of calculation in general. It's high precision, yes, but it's also low range. As the MSDN docs state, it's for financial/monetary calculations, where there isn't much call for Pow, unfortunately!
Of course, you might have a specific problem domain that needs super high precision and all numbers within 10^28 to 10^(-28). But in that case you will probably just need to write your own series calculator, such as the one linked to in the comments to the question.
Not using decimal. Use double instead. According to this thread, the Math.Pow(double, double) is called directly from CLR.
How is Math.Pow() implemented in .NET Framework?
Here is what .NET Framework 4 has (2 lines only)
[SecuritySafeCritical]
public static extern double Pow(double x, double y);
The 128-bit decimal type is not a native hardware type the way double is, so there is no intrinsic Pow for it in the CLR. Maybe in a future Framework version?
Wait, huh? Why can't you use doubles? You could always cast if you're using ints or something:
int a = 1;
int b = 2;
int result = (int)Math.Pow(a,b);