Rounding in C# to 2 decimal places from higher precision number - c#

How would I round the following number in C# to obtain the following results.
Value : 500.0349999999
After Rounding to 2 digits after decimal : 500.04
I have tried Math.Round(Value,2, MidpointRounding.AwayFromZero); //but it returns the value 500.03 instead of 500.04

You're asking for non-standard rounding rules. The value 500.0349999999 rounded to the nearest hundredth should be 500.03. Since the thousandths digit is less than 5, the hundredths digit remains unchanged.
One way to achieve your desired result is to first round the number to one more decimal place than you ultimately want, then round that result to the precision you want.
In your example, you would round the value to 3 decimal places, resulting in 500.035. You would then round that to 2 decimal places, which should result in 500.04 (assuming you're using MidpointRounding.AwayFromZero).
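For example, a minimal sketch of that two-step approach (the exact outcome can still depend on how the intermediate double is represented):
double value = 500.0349999999;
double step1 = Math.Round(value, 3, MidpointRounding.AwayFromZero); // 500.035
double step2 = Math.Round(step1, 2, MidpointRounding.AwayFromZero); // 500.04
Console.WriteLine(step2); // 500.04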
Hope that helps.

You could make it needlessly complex and wrap a variable Round(Decimal, Int32) call in a for loop, decrementing the Int32 to work its way back to the decimal precision needed. It's a bit of work, but as Eric said, you are asking for non-standard rounding rules.
Extra details: http://msdn.microsoft.com/en-us/library/ms131274.aspx
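A rough sketch of that loop (RoundStepwise is just an illustrative name; it assumes you start a few places beyond the target precision):
static decimal RoundStepwise(decimal value, int startDigits, int targetDigits)
{
    // Round one decimal place at a time, working back toward the target precision.
    for (int digits = startDigits; digits >= targetDigits; digits--)
    {
        value = Math.Round(value, digits, MidpointRounding.AwayFromZero);
    }
    return value;
}
// RoundStepwise(500.0349999999m, 9, 2) == 500.04m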

Related

Convert a number string to a rounded decimal (C#)

Sorry for the daft question, but I get back this value from database
"7.545720553985866E+29"
I need to convert this value to a decimal, rounded to 6 digits. What is the best way to do that? I tried
var test = double.Parse("7.545720553985866E+29");
test = Math.Round(test, 6);
var test2 = Convert.ToDecimal(test);
but the value remains unchanged and the conversion crashes.
Math.Round rounds to N digits to the right of the decimal point. Your number has NO digits to the right of the decimal (it is equivalent to 754,572,055,398,586,600,000,000,000,000), so rounding it does not change the value.
If you want to round to N significant digits then look at some of the existing answers:
Round a double to x significant figures
Rounding the SIGNIFICANT digits in a double, not to decimal places
the conversion crashes.
That's because the value is too large for a decimal. The largest value a decimal can hold is 7.9228E+28 - your value is about 10 times larger than that.
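A small illustration of both points (sketch):
var test = double.Parse("7.545720553985866E+29");
Console.WriteLine(Math.Round(test, 6) == test); // True - there are no digits to the right of the decimal point to round
Console.WriteLine(decimal.MaxValue);            // 79228162514264337593543950335, roughly 7.9228E+28
try
{
    var test2 = Convert.ToDecimal(test);        // throws: the value is about 10 times larger than decimal.MaxValue
}
catch (OverflowException)
{
    Console.WriteLine("Too large for a decimal");
}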
Maybe you could take a substring and then parse it:
var test = double.Parse("7.545720553985866E+29".Substring(0, 8)); // 7.545720 (note that this discards the E+29 exponent)
test = Math.Round(test, 6);
var test2 = Convert.ToDecimal(test);
You can use this to round to 6 significant digits:
round(test, 6 - int(math.log10(test)))
The resulting value from that is
7.545721e+29
This works by using log10 from the math module to get the power of 10 in test, rounding it down to an integer, subtracting that from 6, and then using round to get the desired digits.
As noted by others, round works to the given number of decimal places. The log10 and the rest work out how many decimal places are needed to get the desired number of significant digits. If the number of decimal places is negative, round rounds to the left of the decimal point.
You should be aware that log10 is not perfectly accurate and taking the int of that may be off from the expected value by one. This happens rarely but it does happen. Also, even if the computed value is correct, converting the value to string (such as when you print it) may give a different-than-expected result. If you need perfect accuracy you would be better off working from the string representation of the value.
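Note that the snippet above appears to use Python's round and math.log10; Math.Round in C# does not accept a negative digit count, so a common C# approach is to scale by a power of ten instead (a sketch only; RoundToSignificantDigits is an illustrative name and assumes a non-zero value):
static double RoundToSignificantDigits(double value, int significantDigits)
{
    // Number of digits to the left of the decimal point.
    int magnitude = (int)Math.Floor(Math.Log10(Math.Abs(value))) + 1;
    // Scale so the wanted digits sit left of the decimal point, round, then scale back.
    double scale = Math.Pow(10, magnitude - significantDigits);
    return scale * Math.Round(value / scale);
}
// RoundToSignificantDigits(7.545720553985866E+29, 7) ≈ 7.545721E+29
The same caveat about log10 applies here: the floor of Math.Log10 can be off by one for values very close to a power of ten.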

Weird behaviour of decimal.Round

When using decimal, why is the rounding behaviour always the same?
Whether or not I use MidpointRounding.AwayFromZero, it always gives 1.04. In the first case, shouldn't the output be 1.03?
Console.WriteLine(decimal.Round(1.035m, 2));
Console.WriteLine(decimal.Round(1.035m, 2, MidpointRounding.AwayFromZero));
https://github.com/dotnet/corefx/blob/664d98b3dc83a56e1e6454591c585cc6a8e19b78/src/Common/src/CoreLib/System/Decimal.cs#L612
https://github.com/dotnet/corefx/blob/61d792e202d039c304c4f04ad816a57688f32fd4/src/Common/src/CoreLib/System/Decimal.DecCalc.cs#L2429-L2444
No:
This method [Round(decimal d, int decimals)] is equivalent to calling the Round(Decimal, Int32, MidpointRounding) method with a mode argument of MidpointRounding.ToEven.
When d is exactly halfway between two rounded values, the result is the rounded value that has an even digit in the far right decimal position. For example, when rounded to two decimals, the value 2.345 becomes 2.34 and the value 2.355 becomes 2.36.
So when rounding 1.035 to even, it becomes 1.04 because 4 is even and 3 is not.
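A quick check of both modes with exact decimal literals (the 1.025 case is where the two modes actually differ):
Console.WriteLine(decimal.Round(2.345m, 2));                                // 2.34 (ToEven)
Console.WriteLine(decimal.Round(2.355m, 2));                                // 2.36 (ToEven)
Console.WriteLine(decimal.Round(1.035m, 2));                                // 1.04 (ToEven - 4 is even)
Console.WriteLine(decimal.Round(1.035m, 2, MidpointRounding.AwayFromZero)); // 1.04
Console.WriteLine(decimal.Round(1.025m, 2));                                // 1.02 (ToEven - 2 is even)
Console.WriteLine(decimal.Round(1.025m, 2, MidpointRounding.AwayFromZero)); // 1.03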
The default rounding method is MidpointRounding.ToEven, so when choosing whether to round to either 1.03 or 1.04, it chooses the one with the even number at the end, 1.04.
As MSDN says:
public static decimal Round(
decimal d,
int decimals
)
decimals : A value from 0 to 28 that specifies the number of decimal places to
round to.
You want to round to 2 places, so it must be 1.04.
This seems like expected behavior to me when rounding up a decimal.
Ex:
1.035 => 1.040 produce with two decimal place 1.04
1.033 => 1.030 produce with two decimal place 1.03
Isn't the default rounding to round up? In this case the 5 gets rounded up and carried over...
By default, the Round method uses the rounding to nearest convention. The following table lists the overloads of the Round method and the rounding convention that each uses.
https://learn.microsoft.com/en-us/dotnet/api/system.math.round?view=netframework-4.7.2

Math.Round returning a rounded up for odd values but rounds down for even

I am trying to round a float using Math.Round.
I found the following:
0.5 --> 0
1.5 --> 2
2.5 --> 2
3.5 --> 4
and so on.
I believe this is due to floating point error, but I'm not quite sure how.
How can I get around this so even numbers round properly?
From the documentation:
The integer nearest a. If the fractional component of a is halfway
between two integers, one of which is even and the other odd, then the
even number is returned. Note that this method returns a Double
instead of an integral type.
The Math.Round method has overloads that take a MidpointRounding parameter, which lets you specify how to round a value that is midway between two numbers.
AwayFromZero
When a number is halfway between two others, it is rounded toward the
nearest number that is away from zero.
ToEven
When a number is halfway between two others, it is rounded toward the
nearest even number.
You could use this one to overcome what you have stated:
Math.Round(value, MidpointRounding.AwayFromZero);
Using the above:
When a number is halfway between two others, it is rounded toward the
nearest number that is away from zero.
For further documentation about the MidpointRounding enumeration, please have a look here.
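For example, repeating the values from the question under both conventions (a quick sketch):
double[] values = { 0.5, 1.5, 2.5, 3.5 };
foreach (double v in values)
{
    Console.WriteLine($"{v} -> ToEven: {Math.Round(v)}, AwayFromZero: {Math.Round(v, MidpointRounding.AwayFromZero)}");
}
// 0.5 -> ToEven: 0, AwayFromZero: 1
// 1.5 -> ToEven: 2, AwayFromZero: 2
// 2.5 -> ToEven: 2, AwayFromZero: 3
// 3.5 -> ToEven: 4, AwayFromZero: 4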
You may try it like this:
Math.Round(value, MidpointRounding.AwayFromZero);
From MSDN
If the fractional component of a is halfway between two integers, one
of which is even and the other odd, then the even number is returned.
One important point worth mentioning is that Microsoft follows the IEEE 754 standard. This is also mentioned in MSDN for Math.Round under Remarks, which says:
Round to nearest, ties to even – rounds to the nearest value; if the number falls midway it is rounded to the nearest value with an
even (zero) least significant bit, which occurs 50% of the time; this
is the default for binary floating-point and the recommended default
for decimal.
Round to nearest, ties away from zero – rounds to the nearest value; if the number falls midway it is rounded to the nearest value
above (for positive numbers) or below (for negative numbers); this is
intended as an option for decimal floating point.
This is known as banker's rounding (round half to even). You can read more about it here. This is a .NET Framework feature and is working as designed.

Math.Round vs String.Format

I need a double value to be rounded to 2 digits.
What is preferable?
String.Format("{0:0.00}", 123.4567); // "123.46"
Math.Round(123.4567, 2) // "123.46"
Math.Round(double, digits) with digits > 0 is conceptually very unclean, and I think it should never be used: double is a binary floating point number and thus has no well-defined concept of decimal digits.
I recommend using string.Format, or just ToString("0.00"), when you only need to round for decimal display purposes, and decimal.Round if you need to round the actual number (for example, to use it in further calculations).
Note: With decimal.Round you can specify a MidpointRounding mode. It's common to want AwayFromZero rounding, not ToEven rounding.
With ToEven rounding 0.005m gets rounded to 0.00 and 0.015 gets rounded to 0.02. That's not what most people expect.
Comparisons:
ToEven: 3.75 rounds to 3.8
ToEven: 3.85 rounds to 3.8 (That's not what most people expect)
AwayFromZero: 3.75 rounds to 3.8
AwayFromZero: 3.85 rounds to 3.9
for more information see: https://msdn.microsoft.com/en-us/library/system.math.round.aspx
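A quick check of those comparisons with decimal.Round (exact decimal literals, so the midpoints behave predictably):
Console.WriteLine(decimal.Round(3.75m, 1));                                // 3.8 (ToEven)
Console.WriteLine(decimal.Round(3.85m, 1));                                // 3.8 (ToEven)
Console.WriteLine(decimal.Round(3.75m, 1, MidpointRounding.AwayFromZero)); // 3.8
Console.WriteLine(decimal.Round(3.85m, 1, MidpointRounding.AwayFromZero)); // 3.9
Console.WriteLine(decimal.Round(0.005m, 2));                               // 0.00 (ToEven)
Console.WriteLine(decimal.Round(0.015m, 2));                               // 0.02 (ToEven)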
They are different functions. If you need the output to be displayed, use the first one (it also forces the decimals to appear); you will avoid the overhead of the inevitable .ToString() that will occur if the variable is of type double.
Note that the second one rounds the number, but if the result is an integer, you will get just the integer (i.e. 7 vs 7.00).
That depends on what you want to do with it.
String.Format will return a string, Math.Round(double) will return a double.
The former outputs a string, the latter a double. What will you use the result for? The answer to that will answer your question.
If you want to return the value as a string, then String.Format is better; if you want to return it as a double, then Math.Round is better. It totally depends on your requirement.
Math.Round will not add any decimal places if there aren't any to begin with. String.Format will.
e.g.: Math.Round(2.0) returns 2;
String.Format("{0:0.00}", 2) returns "2.00";

How to decide what to use - double or decimal? [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
decimal vs double! - Which one should I use and when?
I'm using double type for price in my trading software.
I've noticed that sometimes there are odd errors.
They occur if price contains 4 digits after "dot", like 2.1234.
When I sent from my program "2.1234" on the market order appears at the price of "2.1235".
I don't use decimal because I don't need "extreme" precision. I don't need to distinguish, for example, "2.00000000003" from "2.00000000002". I need a maximum of 6 digits after the dot.
The question is - where is the line? When to use decimal?
Should I use decimal for any financial operations? Even if I need just one digit after the dot? (1.1, 1.2, etc.)
I know decimal is pretty slow so I would prefer to use double unless decimal is absolutely required.
Use decimal whenever you're dealing with quantities that you want to (and can) be represented exactly in base-10. That includes monetary values, because you want 2.1234 to be represented exactly as 2.1234.
Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.
Of course, if having an exact representation in base-10 is not important to you, other factors come into consideration, which may or may not matter depending on the specific situation:
double has a larger range (it can handle very large and very small magnitudes);
decimal has more precision (has more significant digits);
you may need to use double to interact with some older APIs that are not aware of decimal;
double is faster than decimal;
decimal has a larger memory footprint;
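A couple of quick checks that illustrate the exactness and range points above (the exact text printed for double.MaxValue can vary slightly between runtimes):
Console.WriteLine(double.MaxValue);     // ~1.8E+308 - huge range, roughly 15-17 significant digits
Console.WriteLine(decimal.MaxValue);    // 79228162514264337593543950335 - smaller range, 28-29 significant digits
Console.WriteLine(0.1 + 0.2 == 0.3);    // False (double: binary fractions)
Console.WriteLine(0.1m + 0.2m == 0.3m); // True  (decimal: exact base-10 fractions)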
When accuracy is needed and important, use decimal.
When accuracy is not that important, then you can use double.
In your case, you should be using decimal, as it's a financial matter.
For financial operations I always use the decimal type.
Use decimal; it's built to represent base-10 values (i.e. prices) well.
Decimal is the way to go when dealing with prices.
If it's financial software you should probably use decimal. This wiki article summarises quite nicely.
A simple response is in this example:
decimal d = 0.3M+0.3M+0.3M;
bool ret = d == 0.9M; // true
double db = 0.3 + 0.3 + 0.3;
bool dret = db == 0.9; // false
The test with the double fails since 0.3 in its binary (base 2) representation is periodic, so you lose precision. The decimal type uses a base-10 representation (a scaled integer), so you do not lose significant digits unexpectedly. Decimal is unfortunately dramatically slower than double. Usually we use decimal for financial calculations, where every digit has to be considered to avoid tolerance issues, and double/float for engineering.
Double is meant as a generic floating-point data type; decimal is specifically meant for money and financial domains. Even though double usually works just fine, decimal might prevent problems in some cases (e.g. rounding errors when you get to values in the billions).
There is an explanation of it on MSDN.
As soon as you start to do calculations on doubles you may get unexpected rounding problems, because a double uses a binary representation of the number while the decimal uses a decimal representation that preserves the decimal digits. That is probably what you are experiencing. If you only serialize and deserialize doubles to text or a database without doing any rounding, you will actually not lose any precision.
However, decimals are much more suited for representing monetary values where you are concerned about the decimal digits (and not the binary digits that a double uses internally). But if you need to do complex calculations (e.g. integrals as used by actuary computations) you will have to convert the decimal to double before doing the calculation negating the advantages of using decimals.
A decimal also "remembers" how many digits it has, e.g. even though decimal 1.230 is equal to 1.23 the first is still aware of the trailing zero and can display it if formatted as text.
If you always know the maximum number of decimals you are going to have (digits after the point), then the best practice is to use fixed-point notation. That will give you an exact result while still working very fast.
The simplest way to use fixed point is to store the number as an integer count of fractional units. For example, if the price always has 2 decimals you would store the amount in cents ($12.45 is stored as an int with value 1245, which thus represents 1245 cents). With four decimals you would store ten-thousandths (12.3456 would be stored as an int with value 123456, representing 123456 ten-thousandths), and so on.
The disadvantage of this is that you sometimes need a conversion, for example when multiplying two values together (0.1 * 0.1 = 0.01 while 1 * 1 = 1; the unit has changed from tenths to hundredths). And if you are going to use other mathematical functions you also have to take things like this into consideration.
On the other hand if the amount of decimals vary a lot using fixed point is a bad idea. And if high-precision floating point calculations are needed the decimal datatype was constructed for exactly that purpose.
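A minimal sketch of that idea, storing prices as whole cents in a long (illustrative names only):
long priceCents = 1245;                                       // represents $12.45
long sumCents = priceCents + 55;                              // $12.45 + $0.55 = 1300 cents ($13.00)
long quantityHundredths = 300;                                // represents 3.00, also scaled by 100
long productTenThousandths = priceCents * quantityHundredths; // 373500 - multiplying doubles the scale
long productCents = productTenThousandths / 100;              // 3735 cents ($37.35); real code would round rather than truncate
Console.WriteLine(productCents / 100.0);                      // 37.35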
