Decimal and double values are resolving to zero [duplicate] - c#

How do I divide two integers to get a double?

You want to cast the numbers:
double num3 = (double)num1/(double)num2;
Note: if either operand in C# is a double, floating-point division is used and the result is a double. So the following works too:
double num3 = (double)num1/num2;
For more information see:
Dot Net Perls
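For example, a minimal sketch (assuming num1 and num2 are plain int variables) that contrasts the two behaviors:
int num1 = 7;
int num2 = 12;
double intDivision = num1 / num2;            // 0 - the division happens in int first
double doubleDivision = (double)num1 / num2; // ≈ 0.5833 - floating-point division
Console.WriteLine(intDivision);
Console.WriteLine(doubleDivision);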

Complementing NoahD's answer
To have a greater precision you can cast to decimal:
(decimal)100/863
//0.1158748551564310544611819235
Or:
Decimal.Divide(100, 863)
//0.1158748551564310544611819235
A double is represented using 64 bits, while a decimal uses 128:
(double)100/863
//0.11587485515643106
In-depth explanation of "precision"
For more details about binary floating-point representation and its precision, take a look at this article by Jon Skeet where he talks about floats and doubles, and this one where he talks about decimals.
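As a quick, runnable recap of the comparison above (just a sketch using the same numbers):
Console.WriteLine((double)100 / 863);        // 0.11587485515643106
Console.WriteLine((decimal)100 / 863);       // 0.1158748551564310544611819235
Console.WriteLine(decimal.Divide(100, 863)); // 0.1158748551564310544611819235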

Cast the integers to doubles.

Convert one of them to a double first. This form works in many languages:
real_result = (int_numerator + 0.0) / int_denominator
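In C# that idiom would look like this (int_numerator and int_denominator are hypothetical names, used only for illustration):
int int_numerator = 7;
int int_denominator = 12;
double real_result = (int_numerator + 0.0) / int_denominator; // ≈ 0.5833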

int firstNumber = 5000,
    secondNumber = 37;
var decimalResult = decimal.Divide(firstNumber, secondNumber);
Console.WriteLine(decimalResult);

var result = decimal.ToDouble(decimal.Divide(5, 2));

I have gone through most of the answers, and the key point is that dividing two ints directly will never give you a double or float; integer division always produces an integer.
But there are plenty of ways to make the calculation work: just cast one (or both) of the operands to float or double before dividing.

The easiest way to do this is to add decimal places to your integer literals.
Ex.:
var v1 = 1 / 30;        // the result is 0
var v2 = 1.00 / 30.00;  // the result is 0.0333333333333333...

In the comments to the accepted answer, a distinction is made that seems worth highlighting in a separate answer.
The correct code:
double num3 = (double)num1/(double)num2;
is not the same as casting the result of integer division:
double num3 = (double)(num1/num2);
Given num1 = 7 and num2 = 12:
The correct code will result in num3 = 0.5833333…
Casting the result of integer division will result in num3 = 0.0, because 7 / 12 is evaluated as integer division (yielding 0) before the cast is applied.
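A small sketch showing both forms side by side, using the same num1 and num2:
int num1 = 7, num2 = 12;
Console.WriteLine((double)num1 / (double)num2); // ≈ 0.5833 - double division
Console.WriteLine((double)(num1 / num2));       // 0 - integer division, then cast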

Related

how to leave decimal places to 1 without rounding

I need to convert the value 2.8634 to 2.8. I tried the following:
var no = Math.Round(2.8634, 2, MidpointRounding.AwayFromZero)
but I'm getting 2.86, not 2.8.
Please suggest some ideas for how to convert it.
Thanks
This might do the trick for you
decimal dsd = 2.8634m;
var no = Math.Truncate(dsd * 10) / 10;
Math.Truncate calculates the integral part of a specified decimal number; the number is rounded toward zero to the nearest integer.
You can also have a look at the difference between Math.Floor, Math.Ceiling, Math.Truncate, and Math.Round, which comes with an excellent explanation.
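Put together, a minimal sketch of the truncation approach for the value in the question:
decimal dsd = 2.8634m;
var no = Math.Truncate(dsd * 10) / 10; // 2.8 - digits dropped, not rounded
Console.WriteLine(no);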
You could also use rounding to one decimal place:
var no = Math.Round(2.8634, 1, MidpointRounding.AwayFromZero)
Note, however, that this rounds rather than truncates, so it gives 2.9, not 2.8.
It's a tad more cryptic (but more efficient) than calling a Math method, but you can simply multiply the value by 10, cast to an integer (which effectively truncates the decimal portion), and then divide by 10.0 (or 10d/10f, all just to ensure we don't get integer division) to get back the value you are after. I.e.:
float val = 2.8634f;
val = ((int)(val * 10)) / 10f;

C# double, decimal problems

Why does this calculation: double number = (13 / (13 + 12 + 13))
equal 0?
It should be around 0.34, I think!
Thanks!
Because you are dividing an int by an int. You should be doing:
double number = (13.0 /(13.0+12.0+13.0));
Those are integers, so it does integer division, which truncates toward zero.
Add a .0 to a number, like 13.0, to make it a double.
Because you're using only ints in your formula, the result will be an int too.
Try this instead:
var result = 13.0 / (13.0 + 12.0 + 13.0)
and your result will be:
0.34210526315789475
Try adding a .0:
(13.0 / (13 + 12 + 13))
Otherwise you're dealing with integers.
Another option is to cast one of the operands explicitly to double, thus forcing double division to be performed, e.g.:
double result = ((double)13 / (13 + 12 + 13));
Adding a ".0" will help:
double number = (13.0 /(13.0+12.0+13.0));
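A quick console check of both forms, if you want to see the difference for yourself (just a sketch):
Console.WriteLine(13 / (13 + 12 + 13));   // 0 - pure integer arithmetic
Console.WriteLine(13.0 / (13 + 12 + 13)); // 0.34210526315789475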

How to convert decimal to string value for dollars and cents separated in C#?

I need to display decimal money value as string, where dollars and cents are separate with text in between.
123.45 => "123 Lt 45 ct"
I came up with the following solution:
(value*100).ToString("#0 Lt 00 ct");
However, this solution has two drawbacks:
When I showed this solution to a fellow programmer, it appeared unintuitive and required some explaining.
Cents are always displayed as two digits. (Not a real problem for me, as this is currently how I need it displayed.)
Is there any alternative elegant and simple solution?
This is a fairly simple operation. It should be done in a way that your fellow programmers understand instantly. Your solution is quite clever, but cleverness is not needed here. =)
Use something verbose like
double value = 123.45;
int dollars = (int)value;
int cents = (int)((value - dollars) * 100);
String result = String.Format("{0:#0} Lt {1:00} ct", dollars, cents);
I had some errors with the accepted answer above (it would drop a penny from my result).
Here is my correction:
double val = 125.79;
double roundedVal = Math.Round(val, 2);
double dollars = Math.Floor(roundedVal);
double cents = Math.Round((roundedVal - dollars), 2) * 100;
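If the amount can be kept as a decimal from the start, a variant of the same idea avoids the floating-point penny problem entirely (a sketch, assuming the value is already a decimal):
decimal value = 125.79m;
decimal dollars = decimal.Truncate(value);
decimal cents = (value - dollars) * 100;              // exact - no rounding needed
Console.WriteLine($"{dollars:#0} Lt {cents:00} ct");  // 125 Lt 79 ct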
This might be a bit over the top:
decimal value = 123.45M;
int precision = (Decimal.GetBits(value)[3] & 0x00FF0000) >> 16;
decimal integral = Math.Truncate(value);
decimal fraction = Math.Truncate((decimal)Math.Pow(10, precision) * (value - integral));
Console.WriteLine(string.Format("{0} Lt {1} ct", integral, fraction));
The format of the decimal binary representation is documented here.

Possible Loss of Fraction

Forgive me if this is a naïve question; however, I am at a loss today.
I have a simple division calculation such as follows:
double returnValue = (myObject.Value / 10);
Value is an int in the object.
I am getting a message that says Possible Loss of Fraction. However, when I change the double to an int, the message goes away.
Any thoughts on why this would happen?
When you divide two ints, the fractional portion is lost even if you assign the result to a floating-point variable. If you make one of the operands a floating-point value, you won't get this warning.
So, for example, turn 10 into 10.0:
double returnValue = (myObject.Value / 10.0);
You're doing integer division if myObject.Value is an int, since both sides of the / are of integer type.
To do floating-point division, one of the numbers in the expression must be of floating-point type. That would be true if myObject.Value were a double, or any of the following:
double returnValue = myObject.Value / 10.0;
double returnValue = myObject.Value / 10d; //"d" is the double suffix
double returnValue = (double)myObject.Value / 10;
double returnValue = myObject.Value / (double)10;
An integer divided by an integer will return an integer. Cast Value to a double or divide by 10.0.
Assuming that myObject.Value is an int, the expression myObject.Value / 10 is an integer division, and its result is then converted to a double.
That means that myObject.Value being 12 will result in returnValue becoming 1, not 1.2.
You need to cast the value(s) first:
double returnValue = (double)(myObject.Value) / 10.0;
This results in the correct value 1.2, at least as correct as doubles allow given their limitations, but that's discussed elsewhere on SO, almost endlessly :-).
Since myObject.Value is an int, you should write:
double returnValue = (myObject.Value / 10.0);
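To make the difference concrete, a minimal sketch (the anonymous object here is a hypothetical stand-in for the poster's myObject):
var myObject = new { Value = 12 };
double intDivision = myObject.Value / 10;      // 1 - fraction lost before assignment
double doubleDivision = myObject.Value / 10.0; // 1.2
Console.WriteLine($"{intDivision} vs {doubleDivision}"); // 1 vs 1.2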

Double Precision

I have some code that I do not understand. I am developing an application in which precision is very important, but .NET does not seem to preserve it. Why?
double value = 3.5;
MessageBox.Show((value + 1 * Math.Pow(10, -20)).ToString());
But the message box shows 3.5.
Please help me. Thank you.
If you're doing anything where precision is very important, you need to be aware of the limitations of floating point. A good reference is David Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
You may find that floating-point doesn't give you enough precision and you need to work with a decimal type. These, however, are always much slower than floating point -- it's a tradeoff between accuracy and speed.
You can have precision, but it depends on what else you want to do. If you put the following in a Console application:
double a = 1e-20;
Console.WriteLine(" a = {0}", a);
Console.WriteLine("1+a = {0}", 1+a);
decimal b = 1e-20M;
Console.WriteLine(" b = {0}", b);
Console.WriteLine("1+b = {0}", 1+b);
You will get
a = 1E-20
1+a = 1
b = 0,00000000000000000001
1+b = 1,00000000000000000001
But note that the Pow function, like almost everything in the Math class, only takes doubles:
double Pow(double x, double y);
So you cannot take the sine of a decimal (other than by converting it to a double).
Also see this question.
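So if you need one of the Math functions for a decimal value, the usual workaround is to convert and accept the loss of precision (a sketch, not specific to the question's numbers):
decimal angle = 0.5m;
double sine = Math.Sin((double)angle); // Math.Sin only accepts double
Console.WriteLine(sine);               // ≈ 0.479425538604203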
Or use the Decimal type rather than double.
The precision of a double is 15-16 significant digits (up to 17 internally). The value that you calculate with Math.Pow is correct, but when you add it to value it is simply too small to make a difference.
Edit:
A decimal can handle that precision, but Math.Pow cannot produce one directly. If you want that precision, do the calculation in doubles, then convert each value to a decimal before adding them together:
double value = 3.5;
double small = Math.Pow(10, -20);
Decimal result = (Decimal)value + (Decimal)small;
MessageBox.Show(result.ToString());
Double precision means it can hold 15-16 significant digits. 3.5 + 1e-20 would require 21 significant digits, which cannot be represented in double precision. You can use another type like decimal.
