Is there an easy way to convert, say, 1.3333333 to, say, 1+1/3 in C#? There is code around for just the fractional part, but not the whole number. Should I just pull the whole number out before converting the fraction?
I think it will be hard. Two reasons:
A fraction can always be converted to a decimal number, but a decimal sometimes loses digits, so an exact round trip would be impossible. For example: which number qualifies to be converted to 1/3?
0.33?
0.333333?
0.333333333333333333333333333333333333333333333333333333333333333333?
Every decimal number can be converted to multiple fractions. For example, 0.25 can be converted to:
1/4
2/8
25/100
However, if you are open to a less exact solution, try: Algorithm for simplifying decimal to fractions
Set a maximum denominator, for example 1,000,000 (one million), and try to reduce the fraction from there. Be aware that you can lose precision.
For example, say you have 1.45:
Take the fractional part and multiply it by your denominator: 0.45 becomes 450,000 / 1,000,000.
Then find the greatest common divisor of the numerator and denominator.
Divide both parts by that divisor.
Finally you get your fraction: 9/20 (see the sketch below).
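A minimal sketch of those steps, assuming decimal input; the helper name ToFraction and the use of BigInteger.GreatestCommonDivisor are my own choices:

    using System;
    using System.Numerics; // for BigInteger.GreatestCommonDivisor

    class FractionDemo
    {
        static (long Whole, long Numerator, long Denominator) ToFraction(
            decimal value, long maxDenominator = 1_000_000)
        {
            long whole = (long)Math.Truncate(value);

            // Scale the fractional part up to the chosen denominator.
            long numerator = (long)Math.Round((value - whole) * maxDenominator);

            // Reduce by the greatest common divisor.
            long gcd = (long)BigInteger.GreatestCommonDivisor(numerator, maxDenominator);
            return (whole, numerator / gcd, maxDenominator / gcd);
        }

        static void Main()
        {
            var (w, n, d) = ToFraction(1.45m);
            Console.WriteLine($"{w} + {n}/{d}"); // prints: 1 + 9/20
        }
    }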
With single precision (32 bits), the bits are divided like this: 1 sign bit, 8 exponent bits, and 23 bits of mantissa/significand.
So we can represent 2^23 values via those 23 bits, which is 8388608, a number 7 digits long.
BUT
I was reading that the mantissa is normalized (the leading digit in the mantissa will always be a 1) - so the pattern is actually 1.mmm and only the mmm is represented in the mantissa.
For example, a stored mantissa of 0.75 actually represents the value 1.75.
Question #1
So basically it adds one more digit of precision, no?
If so, then we have 8 significant digits!
So why does MSDN say 7?
Question #2
In a double there are 52 bits for the mantissa (bits 0..51).
If I add 1 for the normalized mantissa, that gives 2^53 possibilities, which is 9007199254740992 (16 digits),
and MS does say 15-16.
Why this inconsistency? Am I missing something?
It doesn't add one more decimal digit - just a single binary digit. So instead of 23 bits, you have 24 bits. This is handy, because the only number you can't represent as starting with a one is zero, and that's a special value.
In short, you're not looking at 2 ^ 24 (that would be a decimal number, base-10) - you're looking at 2 ^ (-24). That's the most important difference between float/double and decimal. decimal is what you imagine floats to be, i.e. a simple exponent-shifted, base-10 number. float and double aren't that.
Now, decimal digits versus binary digits is a tricky matter. You're mistaken in your understanding that the precision has anything to do with the 2 ^ 24 figure - that would only be true if you were talking about e.g. the decimal type, which actually stores decimal values as decimal point offsets of a normal (huge-ass) integer.
Just like 1 / 3 cannot be written in decimal (0.333333...), many simple decimal numbers can't be represented in a float precisely (0.2 is the typical example). decimal doesn't have a problem with that - it's just 2 shifted one digit to the right, easy peasy. For floats, however, you have to represent this value as a sum of negative powers of two - 0.5, 0.25, 0.125 ... The same would apply in the opposite direction if 2 wasn't a factor of 10 - every finite binary "decimal" can be represented with finite precision in decimal.
Now, in fact, float can easily represent a number with 24 decimal digits - it just has to be 2 ^ (-24) - a number you're not going to encounter in your usual day job, and a weird number in decimal. So where does the 7 (actually more like 7.22...) come from? Simple: take the decimal logarithm of 2 ^ 24.
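To see the arithmetic (a one-off snippet):

    using System;

    Console.WriteLine(24 * Math.Log10(2)); // ≈ 7.2247, hence the documented "7 digits"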
The fact that it seems that 0.2 can be represented "exactly" in a float is simply because every time you e.g. convert it to a string, you're rounding. So, even though the number isn't exactly 0.2, it ends up that way when you convert it to a decimal string.
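A quick illustration of that rounding (output as printed by a typical .NET runtime):

    using System;

    float f = 0.2f;
    Console.WriteLine(f);                // 0.2         - the default conversion rounds
    Console.WriteLine(f.ToString("G9")); // 0.200000003 - more digits reveal the stored value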
All this means that when you need decimal precision, you want to use decimal, as simple as that. This is not because it's a better base for calculations, it's simply because humans use it, and they will not be happy if your application gives different results from what they calculate on a piece of paper - especially when dealing with money. Accountants are very focused on having everything correct to the least significant digit.
Floats are used where it's not about decimal precision, but rather about generally having some sort of precision - this makes them well suited for physics calculations and similar, because you don't actually care about having the number come up the same in decimal - you're working with a given precision, and you're going to get that - 24 significant binary "decimals".
The implied leading 1 adds one more binary digit of precision, not decimal.
I use the decimal type for high-precision (monetary) calculations.
But I came across this simple division today:
1 / (1 / 37) which should result in 37 again
http://www.wolframalpha.com/input/?i=1%2F+%281%2F37%29
But C# gives me:
37.000000000000000000000000037M
I tried both of these:
1m/(1m/37m);
and
Decimal.Divide(1, Decimal.Divide(1, 37))
but both yield the same result. How can this behaviour be explained?
Decimal stores the value as a decimal floating-point number with only limited precision. The result of 1 / 37 is not precisely stored: it's stored as 0.027027027027027027027027027M. The true number has the group 027 repeating indefinitely in decimal representation. For that reason, you cannot get a precise decimal representation for every possible number.
If you use Double in the same calculation, the end result is correct in this case (but it does not mean it will always be better).
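To see both behaviours side by side (a minimal snippet reproducing the results described above):

    using System;

    Console.WriteLine(1m / (1m / 37m));    // 37.000000000000000000000000037
    Console.WriteLine(1.0 / (1.0 / 37.0)); // 37 - here the double rounding errors happen to cancel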
A good answer on that topic is here: Difference between decimal, float and double in .NET?
The Decimal data type has a precision of 28-29 significant digits.
So what you have to understand is that with 28-29 significant digits you are still not exact.
So when you compute a decimal value for 1/37, note that at this stage you are only getting 28-29 digits of accuracy. For example, 1/37 is 0.02 if you truncate to 2 decimal places and 0.027 if you truncate to 3. Now imagine dividing 1 by these values in each case: you get 50 in the first case and about 37.037 in the second. Carrying 28-29 digits, as decimal does, takes you to 37.000000000000000000000000037. If you need an exact 37, you simply need more significant digits than decimal offers.
Always do computations at maximum precision and round off only your final answer with Math.Round to get the desired result (see below).
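A sketch of that advice applied to the question's division (the choice of 6 digits here is arbitrary):

    using System;

    decimal raw = 1m / (1m / 37m);        // 37.000000000000000000000000037
    decimal rounded = Math.Round(raw, 6); // 37.000000
    Console.WriteLine(rounded);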
I'm trying to write an algorithm to find the minimum number of precision digits needed to show at least one significant figure using the .NET string formatter.
e.g.
Value     Precision wanted
-----     ----------------
10        0
1         0
0.1       1
0.99      1
0.01      2
0.009     3
(I don't care about further digits, only the first, hence 0.99 only needs a precision of 1.)
The best I can come up with is:
int precision = (int)Math.Abs(Math.Min(0, Math.Floor(Math.Log10(value))));
This works fine but I can't help thinking there's a more elegant solution. Can any maths gurus help me out?
Slightly shorter:
int precision = (int)Math.Max(0, -Math.Floor(Math.Log10(value)));
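A quick check of that one-liner against the table above (a throwaway snippet using the question's values):

    using System;

    foreach (double value in new[] { 10, 1, 0.1, 0.99, 0.01, 0.009 })
    {
        int precision = (int)Math.Max(0, -Math.Floor(Math.Log10(value)));
        Console.WriteLine($"{value} -> {precision}");
    }
    // 10 -> 0, 1 -> 0, 0.1 -> 1, 0.99 -> 1, 0.01 -> 2, 0.009 -> 3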
A float is a binary representation of a fractional value. If your initial value is a single-precision floating-point number, then the significand is scaled by a power of two whose exponent you can extract from the raw bits as

    ((f & 0x7f800000) >> 23) - 127
If the exponent is non-negative then, based on what you have said, you would get back zero (there are digits before the decimal point).
If the exponent is negative, well, that is annoying, since binary digits don't line up well with decimal. It should be doable with a lookup table, though. Check out https://math.stackexchange.com/questions/3987/how-to-calculate-the-number-of-decimal-digits-for-a-binary-number#4080.
Edit: you should read about the way floating point numbers are stored.
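A sketch of that bit extraction in C#. BitConverter.SingleToInt32Bits is available on .NET Core 2.0 and later; on older frameworks, BitConverter.ToInt32(BitConverter.GetBytes(f), 0) does the same job:

    using System;

    float f = 0.15625f; // 1.25 * 2^-3
    int bits = BitConverter.SingleToInt32Bits(f);
    int exponent = ((bits & 0x7f800000) >> 23) - 127;
    Console.WriteLine(exponent); // -3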
Over the lunch break we started debating the precision of the double value type.
My colleague thinks it always has 15 places after the decimal point.
In my opinion one can't tell, because IEEE 754 does not make assumptions about this: it depends on where the first 1 is in the binary representation (i.e. the size of the number before the decimal point counts, too).
How can one make a more qualified statement?
As stated in the C# reference, the precision is 15 to 16 digits in total (depending on the values represented), spread across the digits before and after the decimal point.
In short, you are right, it depends on the values before and after the decimal point.
For example:
12345678.1234567D //Next digit to the right will get rounded up
1234567.12345678D //Next digit to the right will get rounded up
Full sample at: http://ideone.com/eXvz3
Also, trying to think of double values as fixed decimal values is not a good idea.
You're both wrong. A normal double has 53 bits of precision. That's roughly equivalent to 16 decimal digits, but thinking of double values as though they were decimals leads to no end of confusion, and is best avoided.
That said, you are much closer to correct than your colleague: the precision is relative to the value being represented, and sufficiently large doubles have no fractional digits of precision.
For example, the next double larger than 4503599627370496.0 is 4503599627370497.0.
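To see that boundary in action (a small snippet; 2^52 is where the spacing between consecutive doubles reaches 1.0):

    using System;

    double big = 4503599627370496.0; // 2^52
    Console.WriteLine(big + 0.4);    // 4503599627370496 - the 0.4 is lost
    Console.WriteLine(big + 0.6);    // 4503599627370497 - rounded to the next representable double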
C# doubles are represented according to IEEE 754 with a 53-bit significand p (or mantissa) and an 11-bit exponent e, which has a range between -1022 and 1023. Their value is therefore
p * 2^e
The significand always has one digit before the (binary) point, so the precision of its fractional part is fixed. On the other hand, the number of digits after the decimal point in a double also depends on its exponent: numbers whose exponent exceeds the number of digits in the fractional part of the significand do not have a fractional part at all.
What Every Computer Scientist Should Know About Floating-Point Arithmetic is probably the most widely recognized publication on this subject.
Since this is the only question on SO that I could find on this topic, I would like to make an addition to jorgebg's answer.
According to this, precision is actually 15-17 digits. An example of a double with 17 digits of precision would be 0.92107099070578813 (don't ask me how I got that number :P)
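A round-trip illustration of why up to 17 digits can be needed (0.1 is the classic example):

    using System;

    double d = 0.1;
    Console.WriteLine(d.ToString("G15")); // 0.1
    Console.WriteLine(d.ToString("G17")); // 0.10000000000000001 - the stored value differs from 0.1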
How would I round the following number in C# to obtain the following result?
Value: 500.0349999999
After rounding to 2 digits after the decimal point: 500.04
I have tried Math.Round(Value, 2, MidpointRounding.AwayFromZero); // but it returns 500.03 instead of 500.04
You're asking for non-standard rounding rules. The value 500.0349999999 rounded to the nearest hundredth should be 500.03: since the thousandths digit is less than 5, the hundredths digit remains unchanged.
One way I can see to achieve your desired result is to round the number to the decimal place one smaller than what you ultimately want. Then round that result to the precision you want.
In your example, you would round the value to 3 decimal places, resulting in 500.035. You would then round that to 2 decimal places, which should result in 500.04 (assuming you're using MidpointRounding.AwayFromZero).
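A sketch of that two-step rounding using decimal (decimal sidesteps the binary-representation pitfalls a double would add here):

    using System;

    decimal value = 500.0349999999m;
    decimal step1 = Math.Round(value, 3, MidpointRounding.AwayFromZero); // 500.035
    decimal step2 = Math.Round(step1, 2, MidpointRounding.AwayFromZero); // 500.04
    Console.WriteLine(step2);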
Hope that helps.
You could make it needlessly complex and wrap a variable Round(Decimal, Int32) call in a for loop, decrementing the Int32 to work its way back to the decimal precision needed (see the sketch below). It's a bit of work, but like Eric said, you are asking for non-standard rounding rules.
Extra details: http://msdn.microsoft.com/en-us/library/ms131274.aspx
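A minimal sketch of that loop idea; the helper name RoundStepwise and the starting precision of 10 are arbitrary choices:

    using System;

    Console.WriteLine(RoundStepwise(500.0349999999m, 2)); // 500.04

    // Round at decreasing precision, stepping down to the target digit count.
    static decimal RoundStepwise(decimal value, int digits, int startDigits = 10)
    {
        decimal result = value;
        for (int d = startDigits; d >= digits; d--)
            result = Math.Round(result, d, MidpointRounding.AwayFromZero);
        return result;
    }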