Given X.Y, I want to get X and Y.
For instance, given 123.456 I want to get 123 and 456 (NOT 0.456).
I can do the following:
decimal num = 123.456M;
decimal integer = Math.Truncate(num);
decimal fractional = num - Math.Truncate(num);
// integer = 123
// fractional = 0.456 but I want 456
As mentioned above, this method gives me 0.456, while I need 456. Sure, I can do the following:
int fractionalNums = (int)((num - Math.Truncate(num)) * 1000);
// fractionalNums = 456
Additionally, this method requires knowing how many fractional digits a given decimal has so that you know what power of ten to multiply by (e.g., 123.456 has three, 123.4567 has four, 123.456789 has six, and 123.1234567890123456789 has nineteen).
A few points to consider:
This operation will be executed millions of times; hence, performance is critical (maybe a bitwise solution would do better);
Precision is critical, and no rounding is acceptable.
NOTE 1
For performance reasons, I am NOT interested in string manipulation-based approaches.
NOTE 2
The numbers in my question are of type decimal, so methods that work only for decimal and fail on float or double (due to floating-point precision) are acceptable.
NOTE 3
The two sides of the decimal (i.e., the integer and fractional parts) can be considered two integers. Hence, 123.000456 is not an expected input; even if it is given, it is acceptable to split it into 123 and 456 (because both sides are treated as integers).
BitConverter.GetBytes(decimal.GetBits(num)[3])[2] gives the number of digits after the decimal point (the scale).
long[] tens = new long[] { 1, 10, 100, 1000 /* ... remaining powers of 10 ... */ };
decimal num = 123.456M;
int iPart = (int)num;
decimal dPart = num - iPart;
int count = BitConverter.GetBytes(decimal.GetBits(num)[3])[2];
long pow = tens[count];
Console.WriteLine(iPart);
Console.WriteLine((long)(dPart * pow));
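If it helps, the same two steps can be wrapped into a small helper. This is only a sketch (the method name is mine), and it assumes the scale of num does not exceed the largest index of the tens table:
static void Split(decimal num, long[] tens, out int iPart, out long fPart)
{
    iPart = (int)num;                                              // truncates toward zero
    int scale = BitConverter.GetBytes(decimal.GetBits(num)[3])[2]; // digits after the decimal point
    fPart = (long)((num - iPart) * tens[scale]);                   // 0.456 * 1000 -> 456
}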
Decimal has a 96-bit mantissa, so a long is not big enough to hold every possible value.
Define all the (non-negative) powers of 10 that Decimal supports:
decimal[] mults = { 1M, 1e1M, 1e2M, 1e3M, /* insert rest here */ 1e27M, 1e28M };
Then, inside the loop you need to get the scale (the power of 10 by which the "mantissa" is divided to get the nominal value of the decimal):
int[] bits = Decimal.GetBits(n);
int scale = (bits[3] >> 16) & 31; // 567.1234 represented as 5671234 x 10^-4
decimal intPart = (int)n; // 567.1234 --> 567
decimal decPart = (n - intPart) * mults[scale]; // 567.1234 --> 0.1234 --> 1234
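For illustration, assuming mults has been filled in with all the powers up to 1e28M, this keeps the fractional digits exactly even when they would not fit in a long:
decimal n = 0.1234567890123456789012345678m;   // 28 fractional digits: more than a long can hold
int[] bits = Decimal.GetBits(n);
int scale = (bits[3] >> 16) & 31;               // 28
decimal intPart = (int)n;                       // 0
decimal decPart = (n - intPart) * mults[scale];
Console.WriteLine(decPart);                     // 1234567890123456789012345678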
The easiest way is probably to convert the number to string.
Then take the substring after the decimal point, and convert it back to int.
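Something along these lines, as a rough sketch (note that it allocates strings, which conflicts with NOTE 1 in the question above):
decimal num = 123.456m;
string s = num.ToString(System.Globalization.CultureInfo.InvariantCulture);
int dot = s.IndexOf('.');
int intPart = int.Parse(dot < 0 ? s : s.Substring(0, dot));
int fracPart = dot < 0 ? 0 : int.Parse(s.Substring(dot + 1));
// intPart = 123, fracPart = 456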
Related
I have an interesting problem: I need to convert an int to a decimal.
So for example given:
int number = 2423;
decimal convertedNumber = Int2Dec(number,2);
// decimal should equal 24.23
decimal convertedNumber2 = Int2Dec(number,3);
// decimal should equal 2.423
I have played around, and this function works. I just hate that I have to create a string and convert it to a decimal; it doesn't seem very efficient:
decimal IntToDecConverter(int number, int precision)
{
    decimal percisionNumber = Convert.ToDecimal("1".PadRight(precision + 1, '0'));
    return Convert.ToDecimal(number / percisionNumber);
}
Since you are trying to make the number smaller, couldn't you just divide by 10 (1 decimal place), 100 (2 decimal places), 1000 (3 decimal places), and so on?
Notice the pattern yet? As we increase the number of digits to the right of the decimal point, we also increase the divisor by a factor of ten (10 for 1 digit after the decimal point, 100 for 2 digits, etc.).
So the pattern signifies we are dealing with a power of 10: Math.Pow(10, x).
Given an input (the number of decimal places), make the conversion based on that.
Example:
int x = 1956;
int powBy = 3;
decimal d = x / (decimal)Math.Pow(10.00, powBy);
// from 1956 to 1.956, based on powBy
With that being said, wrap it into a function:
decimal IntToDec(int x, int powBy)
{
    return x / (decimal)Math.Pow(10.00, powBy);
}
Call it like so:
decimal d = IntToDec(1956, 3);
Going the opposite direction
You could also do the opposite if someone stated they wanted to take a decimal like 19.56 and convert it to an int. You'd still use the Pow mechanism, but instead of dividing you would multiply.
double d = 19.56;
int powBy = 2;
double n = d * Math.Pow(10, powBy);
You can try creating the decimal explicitly with the constructor, which has been specially designed for this:
public static decimal IntToDecConverter(int number, int precision) {
    return new decimal(Math.Abs(number), 0, 0, number < 0, (byte)precision);
}
E.g.
Console.WriteLine(IntToDecConverter(2423, 2));
Console.WriteLine(IntToDecConverter(1956, 3));
Outcome:
24.23
1.956
Moving the decimal point like that is just a function of multiplying/dividing by a power of 10.
So this function would work:
decimal IntToDecConverter(int number, int precision)
{
    // -1 flips the exponent so the factor is a fraction; same as dividing
    decimal factor = (decimal)Math.Pow(10, -1 * precision);
    return number * factor;
}
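For example, calling it with the values from the earlier posts (a quick usage check):
Console.WriteLine(IntToDecConverter(2423, 2)); // 24.23
Console.WriteLine(IntToDecConverter(1956, 3)); // 1.956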
number/percisionNumber will give you an integer which you then convert to decimal.
Try...
return Convert.ToDecimal(number) / percisionNumber;
Convert your method as below:
public static decimal IntToDecConverter(int number, int precision)
{
    return number / (decimal)Math.Pow(10, precision);
}
Here is the code which made me post this question.
// int integer;
// int fraction;
// double arg = 110.1;
this.integer = (int)(arg);
this.fraction = (int)((arg - this.integer) * 100);
The variable integer is getting 110. That's OK.
The variable fraction is getting 9, however I am expecting 10.
What is wrong?
Update
It seems I have discovered that the source of the problem is the subtraction:
arg - this.integer
Its result is 0.099999999999994316.
Now I am wondering how I should perform the subtraction correctly so that the result is 0.1.
You have this:
fraction = (int)((110.1 - 110) * 100);
The inner part, (110.1 - 110) * 100, will be about 9.99999999999943 rather than 10.
When you cast it to int, it is truncated down to 9.
This is because of floating-point limitations:
Computers always need some way of representing data, and ultimately those representations will always boil down to binary (0s and 1s). Integers are easy to represent, but non-integers are a bit more tricky. Consider the following var:
double x = 0.1d;
The variable x will actually store the closest available double to that value. When you understand this, it becomes obvious why some calculations seem to be "wrong".
If you were asked to add a third to a third, but could only use 3 decimal places, you'd get the "wrong" answer: the closest you could get to a third is 0.333, and adding two of those together gives 0.666, rather than 0.667 (which is closer to the exact value of two thirds).
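You can see this directly by printing the intermediate values with the round-trip format (a small sketch):
double arg = 110.1;
Console.WriteLine((arg - 110).ToString("R"));         // roughly 0.0999999999999943, not 0.1
Console.WriteLine(((arg - 110) * 100).ToString("R")); // roughly 9.9999999999994, not 10
Console.WriteLine((int)((arg - 110) * 100));          // 9, because the cast to int truncates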
Update:
In financial applications, or wherever the numbers must be exact, you can use the decimal data type:
(int)((110.1m - 110) * 100) // will be 10 (the m suffix makes it a decimal literal)
or:
decimal arg = 110.1m;
int integer = (int)arg;                      // 110
int fraction = (int)((arg - integer) * 100); // will be 10
It is because you are using double and precision is lost; if you want the result to be 10, use the decimal type.
Check the following:
int integer;
int fraction;
decimal arg = 110.1M;
integer = (int)(arg);
decimal diff = arg - integer;
decimal multiply = diff * 100;
fraction = (int)multiply; // output will be 10, as you expect
I need a double value to contain only 2 digits after the decimal point, such as 2.15 or 20.15. If the input value is 3.125, then it should print an error message.
My code is:
private static bool isTwoDigits(double num)
{
    return (num - Math.Floor(num)).ToString().Length <= 4;
}
If you input 2.15, it computes 2.15 - 2 = 0.15, and "0.15" has length 4, which works. But when I change num to 20.15 it doesn't work, because (num - Math.Floor(num)) returns 0.14999999999... here.
Any other good ideas?
This is the nature of binary floating-point numbers. Just as 1/3 can't be written out exactly as a finite decimal number, 0.1 can't be represented exactly by a finite binary expansion.
So depending on what you are trying to achieve exactly, you could:
If you are validating some string input (e.g. a textbox), you can process the information at the string level, e.g. with a RegEx.
You can store your numbers in the decimal datatype, which can store decimal values exactly.
You can do your computation on a double, but you have to give yourself a tolerance. If you expect only 2 digits of precision, you can do something like Math.Abs(x - Math.Round(x, 2)) < 0.00000001. The definition of this tolerance margin depends on your use case; see the sketch below.
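A minimal sketch of that last option, wrapped as a method (the name and the tolerance are placeholders to adapt to your case):
static bool HasAtMostTwoDecimals(double num)
{
    // Compare against the value rounded to 2 places, within a small tolerance,
    // to absorb binary floating-point noise such as 20.15 -> 20.149999...
    return Math.Abs(num - Math.Round(num, 2)) < 0.00000001;
}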
If you're really worried about the number of decimal places in a base-10 number, use decimal instead of double.
The decimal type is intended for financial calculations, and the reason it's called decimal in the first place is that it can better handle base-10 values such as dollars and cents.
And you can also check whether the number has at most 2 decimal digits a bit more simply:
return num % 0.01m == 0.0m;
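That check only compiles if num is a decimal, so the method from the question would become something like this sketch:
private static bool IsTwoDigits(decimal num)
{
    // Exact in decimal: 2.15m % 0.01m == 0.00m, while 3.125m % 0.01m == 0.005m.
    return num % 0.01m == 0.0m;
}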
So, as has already been said, you can use a regex to ensure the entire format is correct.
But if you know there will only be one decimal point (because it's already a number), you can also just use String.IndexOf.
For example:
double foo = .... ;
string fooString = foo.ToString();
if (fooString.Length - fooString.IndexOf(".") != 3) { /* error */ }
(It's 3 because Length is the maximum index + 1.)
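A slightly more defensive version of the same idea, as a rough sketch (it guards against there being no decimal point at all and forces the separator to be a dot):
double foo = 20.15;
string fooString = foo.ToString(System.Globalization.CultureInfo.InvariantCulture);
int dot = fooString.IndexOf('.');
if (dot >= 0 && fooString.Length - dot - 1 > 2)
{
    Console.WriteLine("error: more than 2 digits after the decimal point");
}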
I have a web application that will apply a percentage markup to a product, but the percentage will be specified by a user. For example, the user can indicate they want to mark up a product 5%, 9%, 23%, etc. My problem is that the product price will change as well, and in doing so end up giving ugly values ($1462.72).
As a result, my users are hoping that we can round the value to the nearest amount ending in 5 or 0. So if my marked-up product price is $1462.72, it would round up to $1465; $1798.02, on the other hand, would round up to an even $1800.
Using VB/C#, how can I go about rounding these values?
Thanks!
To round to an arbitrary modulus you can create a function like:
public decimal Round(decimal source, decimal modulus)
{
    return (Math.Round(source / modulus) * modulus);
}
and use it in this way:
decimal rounded = Round(1798.02m , 5.0m); // yields 1800.0
decimal rounded = Round(1462.72m , 5.0m); // yields 1465.0
decimal rounded = Round(2481.23m , 5.0m); // yields 2480.0
Note that Math.Round by default rounds midpoint values to the closest even number (e.g. 1.5 and 2.5 would both "round" to 2). In your case, the effect is that any number exactly halfway between two multiples of 5 (i.e. ending in 2.50 or 7.50) would be rounded to the closest multiple of 10:
decimal rounded = Round(1697.50m , 5.0m); // yields 1700.0
decimal rounded = Round(1702.50m , 5.0m); // yields 1700.0
If you want to always round UP on the midpoint, just specify that in Round:
return (Math.Round(source / modulus, MidpointRounding.AwayFromZero) * modulus);
You can use the modulus operator to calculate the adjustment needed.
decimal price = 1798.02m;
decimal adjustment = price % 5.0m;
if (adjustment != 0) // so we don't round up already-round numbers
{
    price = (price - adjustment) + 5;
}
This will bring it up to the next multiple of 5.
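Applied to the values from the question, for illustration (note this always rounds up to the next multiple of 5, unlike the Math.Round approach above, which rounds to the nearest):
decimal price = 1462.72m;
decimal adjustment = price % 5.0m;    // 2.72
if (adjustment != 0)
{
    price = (price - adjustment) + 5; // 1465
}
// 1798.02m -> adjustment 3.02 -> 1800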
Given 2 values like so:
decimal a = 0.15m;
decimal b = 0.85m;
Where a + b will always be 1.0m, both values are specified to only 2 decimal places, and both values are >= 0.0m and <= 1.0m.
Is it guaranteed that x == total will always be true, for all possible Decimal values of x, a and b? Using the calculation below:
decimal x = 105.99m;
decimal total = (x * a) + (x * b);
Or are there cases where x == total only to 2 decimal places, but not beyond that?
Would it make any difference if a and b could be specified to unlimited decimal places (as much as Decimal allows), but as long as a + b = 1.0m still holds?
Decimal is stored as a sign, an integer, and an integer exponent of 10 that determines where the decimal point goes. As long as the integral portion of the number (e.g. 105 in 105.99) is not too large, a + b will always equal one, and the outcome of your equation (x * a) + (x * b) will always have the correct value to four decimal places.
Unlike float and double, precision is not lost up to the size of the data type (128 bits).
From MSDN:
The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors.
The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding. For example, the following code produces a result of 0.9999999999999999999999999999 rather than 1:
decimal dividend = Decimal.One;
decimal divisor = 3;
// The following displays 0.9999999999999999999999999999 to the console
Console.WriteLine(dividend/divisor * divisor);
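For illustration, checking the original example directly (a quick sketch):
decimal x = 105.99m;
decimal a = 0.15m;
decimal b = 0.85m;
decimal total = (x * a) + (x * b);
Console.WriteLine(total);      // 105.9900 (trailing zeros come from the result's scale)
Console.WriteLine(x == total); // True: decimal equality compares numeric values, not scales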
The maximum precision of decimal in the CLR is 29 significant digits. When you're using that kind of precision, you're really talking about approximation, especially if you do multiplication, because that requires intermediate results that the CLR must be able to process (see also http://msdn.microsoft.com/en-us/library/364x0z75.aspx).
If you have x with 2 significant digits and, say, a with 20 significant digits, then x * a will already have a minimum precision of 22 digits, and possibly more may be needed for intermediate results.
If x always has only 2 significant digits and you can keep the number of significant digits in a and b low enough (say, 22 digits -- pretty good and probably far enough away from 27 to deal with rounding errors), then I suppose (x * a) + (x * b) should be a pretty precise calculation always.
Finally, the fact that a + b always adds up to 1.0m says nothing about a's and b's individual precisions.