I am using MVC3 with C# and I'm trying to compute a percentage from the following in my model:
I retrieve the numbers:
... Code omitted
AgeGroup = g.Key.AgeGroup,
Count = (int)g.Count(),
Total = (int)
(from vw_masterview0 in ctx.vw_MasterViews
select new
{
vw_masterview0.ClientID
}).Count()
... Code omitted
I need to divide:
percent = Count/Total * 100
I have no idea how to format this in Linq.
You first need to cast to decimal or double to avoid integer division. After multiplying by 100 and rounding, you cast back to int.
The casts of the Count() results to int, on the other hand, are redundant: Count() already returns an int.
int count = g.Count();
int total = ctx.vw_MasterViews.Count();
int percent = (int)Math.Round((decimal)count / total * 100, MidpointRounding.AwayFromZero);
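If you'd rather compute the percentage inside the query itself, here is a hypothetical, self-contained sketch; an in-memory array stands in for ctx.vw_MasterViews, and the names are illustrative, not the question's actual schema:

```csharp
using System;
using System.Linq;

var clients = new[]
{
    new { ClientID = 1, AgeGroup = "18-25" },
    new { ClientID = 2, AgeGroup = "18-25" },
    new { ClientID = 3, AgeGroup = "26-35" },
    new { ClientID = 4, AgeGroup = "26-35" },
};

int total = clients.Length;

var stats = from c in clients
            group c by c.AgeGroup into g
            select new
            {
                AgeGroup = g.Key,
                Count = g.Count(),
                // Cast to decimal before dividing to avoid integer division.
                Percent = (int)Math.Round((decimal)g.Count() / total * 100,
                                          MidpointRounding.AwayFromZero)
            };

foreach (var s in stats)
    Console.WriteLine($"{s.AgeGroup}: {s.Count} ({s.Percent}%)"); // e.g. "18-25: 2 (50%)"
```

Note that against a real Entity Framework context the Math.Round call may need to move client-side, since not every provider can translate it.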
Here is the code which made me post this question.
// int integer;
// int fraction;
// double arg = 110.1;
this.integer = (int)(arg);
this.fraction = (int)((arg - this.integer) * 100);
The variable integer is getting 110. That's OK.
The variable fraction is getting 9, however I am expecting 10.
What is wrong?
Update
It seems I have discovered that the source of the problem is subtraction
arg - this.integer
Its result is 0.099999999999994316.
Now I am wondering how I should subtract correctly so that the result is 0.1.
You have this:
fraction = (int)((110.1 - 110) * 100);
The inner expression, (110.1 - 110) * 100, evaluates to roughly 9.9999999999999943.
When you cast it to int, it is truncated to 9.
This is because of floating-point limitations:
Computers always need some way of representing data, and ultimately
those representations will always boil down to binary (0s and 1s).
Integers are easy to represent, but non-integers are a bit more
tricky. Consider the following var:
double x = 0.1d;
The variable x will actually store the closest available double to
that value. When you understand this, it becomes obvious why some
calculations seem to be "wrong".
If you were asked to add a third to a third, but could only use 3
decimal places, you'd get the "wrong" answer: the closest you could
get to a third is 0.333, and adding two of those together gives 0.666,
rather than 0.667 (which is closer to the exact value of two thirds).
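A minimal sketch of that behaviour, reusing the question's value of 110.1: the double stores the nearest representable value (slightly below 110.1), while decimal stores it exactly.

```csharp
using System;

double arg = 110.1;                       // actually ~110.09999999999999
double diff = (arg - 110) * 100;          // ~9.9999999999999943
int fraction = (int)diff;                 // the cast truncates: 9

int exact = (int)((110.1m - 110) * 100);  // decimal arithmetic is exact: 10

Console.WriteLine($"{fraction} {exact}"); // prints "9 10"
```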
Update:
In financial applications, or wherever values must be exact, you can use the decimal data type:
(int)((110.1m - 110) * 100) // will be 10 (the m suffix denotes a decimal literal)
or:
decimal arg = 110.1m;
int integer = (int)(arg); //110
int fraction = (int)((arg - integer) * 100); // will be 10
This happens because you are using double, so precision is lost. If you want the result to be 10, use the decimal type:
check the following:
int integer;
int fraction;
decimal arg = 110.1M;
integer = (int)(arg);
decimal diff = arg - integer;
decimal multiply = diff * 100;
fraction = (int)multiply; // output will be 10, as you expect
This is blowing my mind. I have this class with the following properties:
public IEnumerable<QuestionModel> Questions { get; set; }
public int TotalQuestions
{
get
{
return Questions.Count();
}
}
public int TotalCorrect
{
get
{
return Questions.Count( x => x.Correct );
}
}
public int Score
{
get
{
return ( TotalCorrect / TotalQuestions ) * 100;
}
}
Here's how I create the model in the controller:
var model = new QuizModel
{
Questions = new List<QuestionModel>
{
new QuestionModel
{
Correct = true
},
new QuestionModel
{
Correct = false
}
}
};
TotalQuestions is equal to 2. TotalCorrect is equal to 1. But Score is always 0.
I thought maybe Score was set before the other properties were set, so I tried this:
public int Score()
{
return ( TotalCorrect / TotalQuestions ) * 100;
}
I figured this would work because by the time I called Score() in the view the other properties would be set for sure. But it just returns 0.
I also tried changing the IEnumerable to an IList. No luck there.
This is blowing my mind.
Dude. Chill. It's all good.
TotalQuestions is equal to 2. TotalCorrect is equal to 1. But Score is always 0.
Well, do the math yourself. What integer is closest to 1 / 2, rounding towards zero? Obviously zero. What is zero multiplied by 100? Obviously zero. So the answer is zero.
The problem is that you're using all-integer arithmetic. Integer division truncates toward zero, which is always zero in your scenario -- unless the number of correct answers exactly equals the total number of questions, in which case it is one.
There are two techniques to fix the problem.
First, you could multiply by 100 and then do the division.
return ( 100 * TotalCorrect ) / TotalQuestions;
Now we multiply 100 by 1, get 100, divide that by 2, get 50, done.
Or you could cast one of the integers to a decimal, do the computation in decimals, and then cast it back to integer at the end:
public int Score()
{
return (int)(( (decimal)TotalCorrect / TotalQuestions ) * 100);
}
Now we convert 1 to 1.0m, divide by 2 to get 0.5m, and multiply by 100 to get 50.0m. Then convert that to int to get 50.
Note: use decimal and not double. You are less likely to run into strange rounding errors if you do. Remember, decimal accurately represents fractions where the denominator contains any combination of powers of two and five; double only accurately represents fractions where the denominator is a power of two.
Should you ever wish to allow a non-integer score, the latter algorithm is probably better.
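A sketch of that non-integer variant, under the assumption that a decimal score is acceptable; the zero-questions guard is an addition and is not in the original class:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var model = new QuizModel
{
    Questions = new List<QuestionModel>
    {
        new QuestionModel { Correct = true },
        new QuestionModel { Correct = false },
    }
};
Console.WriteLine(model.Score == 50m); // True

class QuestionModel { public bool Correct { get; set; } }

class QuizModel
{
    public IEnumerable<QuestionModel> Questions { get; set; }
    public int TotalQuestions => Questions.Count();
    public int TotalCorrect => Questions.Count(x => x.Correct);

    // decimal instead of int, plus a guard for an empty question list
    // (an assumption added here).
    public decimal Score => TotalQuestions == 0
        ? 0m
        : (decimal)TotalCorrect / TotalQuestions * 100;
}
```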
You're dividing an integer by an integer, so the result is an integer. The true result, 0.5, is truncated to 0.
Just cast either operand to a double (or decimal) first:
( TotalCorrect / (double)TotalQuestions ) * 100;
You are dividing integers, so the result is truncated toward zero. Convert the first operand to float or double.
public int Score()
{
return (int)(((float)TotalCorrect / TotalQuestions ) * 100);
}
I have a database with a table, containing two integer fields.
When I try to get a percentage (fieldA / (fieldA+fieldB)), the value is 0.
double percentage = fieldA+fieldB // WORKS; 5+5=10
double percentage = fieldA / (fieldA+fieldB) // DOES NOT WORK; 5/(5+5)=0
So what's the problem here? Thanks..
When you do fieldA / (fieldA+fieldB) you get 5/10, which as an integer is truncated to 0, you have to do double division if you want 0.5 as a result.
i.e. something like this:
double percentage = (double)fieldA/(fieldA+fieldB)
I assume fieldA and fieldB are integers? If so, you should be aware that dividing two integers results in an integer too, no matter what type is on the left side of the assignment.
So dividing fieldA by (fieldA+fieldB) yields a value < 1, which truncates to zero. See Integer Division on Wikipedia.
To correct the issue, simply cast at least one operand to a floating point type like e.g.:
double percentage = fieldA/(double)(fieldA+fieldB)
Since fieldA and fieldB are integers, the expression fieldA / (fieldA+fieldB) is of the form int / int, which means integer division is used. Integer division finds the quotient a and remainder b in x = a*m + b; here 5 = a*10 + b gives a = 0, b = 5, so the result is 0.
You can do however:
double percentage = (double)fieldA / (fieldA+fieldB);
You may need to cast the fields to double before applying the operation.
Try this: double percentage = (double)fieldA/(double)(fieldA+fieldB)
If both fields are integers, then the sum of the two integers is also an integer.
Try this
float result = (float)((A*100)/(B+A));
Answer: result = 50.0
I have an integer (representing seconds) which I'm converting to hours by dividing by 3600. I then store the value in a property (type int). If the value contains a decimal point, I convert it by casting. However, when I try to assign the value to the property, I get an error: "Cannot implicitly convert type 'decimal' to 'int'." Here's my code:
var p = ((Hours.Duration) / 3600.0);
(Hours.Duration) = p;
However,
Hours.Duration = (Hours.Duration) / 3600
works fine, and rounds to an int. What am I doing wrong?
decimal p = ((Hours.Duration) / 3600);
(Hours.Duration) = p;
You are getting the error because p is a decimal and Hours.Duration is an int; you cannot assign a decimal to an int without an explicit cast.
(Hours.Duration) = (int)p;
If Hours.Duration is an int and 3600 is also an int, the division is integer division and the fractional part is lost: in integer division, 7/2 = 3. If you want the answer to be 3.5, at least one operand must be a floating-point number: 7.0/2 = 3.5, or 7/2.0 = 3.5.
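The rule above in a few lines:

```csharp
using System;

int truncated = 7 / 2;     // 3   - int / int is integer division
double half1 = 7.0 / 2;    // 3.5 - one floating-point operand is enough
double half2 = 7 / 2.0;    // 3.5
decimal half3 = 7m / 2;    // 3.5 - decimal division keeps the fraction too

Console.WriteLine($"{truncated} {half1} {half2} {half3}"); // "3 3.5 3.5 3.5"
```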
Try:
Hours.Duration = Convert.ToInt32(p);
Don't define p as decimal. The real question is whether you want to count a partial hour (e.g. whether 4000/3600 should give 1 or 2). If not, you can write directly:
Hours.Duration /= 3600;
or, if you want to count any partial hour:
Hours.Duration = Hours.Duration / 3600 + ((Hours.Duration % 3600 > 0) ? 1 : 0);
or, if you want conventional rounding (half up):
Hours.Duration = Hours.Duration / 3600 + ((Hours.Duration % 3600 >= 1800) ? 1 : 0);
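Checking the three variants with a sample value, say Hours.Duration = 4000 seconds (4000/3600 is about 1.11 hours):

```csharp
using System;

int duration = 4000;
int truncated = duration / 3600;                                       // 1
int roundedUp = duration / 3600 + (duration % 3600 > 0 ? 1 : 0);       // 2
int roundedHalf = duration / 3600 + (duration % 3600 >= 1800 ? 1 : 0); // 1 (400 < 1800)

Console.WriteLine($"{truncated} {roundedUp} {roundedHalf}"); // "1 2 1"
```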
You can use this code:
int vIn = 0;
double vOut = Convert.ToDouble(vIn);
Here is a handy data-type conversion reference for others: Convert decimal to int in C#
I have this function I am writing.
const int ProgressBarLength = 230;
foreach (TransactionDetail item in list)
{
var itemProgress =
((ProgressBarLength/item.PurchasesRequired) *
Convert.ToInt32(item.TransactionAmount));
item.ProgressBar = itemProgress > ProgressBarLength ? ProgressBarLength : itemProgress;
}
Now I have 2 TransactionDetails in my loop.
If item.PurchasesRequired = 500 and TransactionAmount = 199.0 the resulting value is 0. However, if item.PurchasesRequired = 5 and TransactionAmount = 94.0 it returns a valid result.
What am I doing wrong?
Is item.PurchasesRequired an int?
If so, your problem is integer division.
ProgressBarLength is an int, so 230/500 = 0.
Use float, double, or decimal (either in a cast or for your ProgressBarLength) to maintain your desired level of precision.
I'm guessing you should cast to double somewhere to get more precision in your divisions. When you divide an int by an int, you won't get a double as the result.
Try the following:
double itemProgress = ( ((double)ProgressBarLength / item.PurchasesRequired )
* Convert.ToInt32( item.TransactionAmount ) );
It looks like you're performing integer division.
230 / 500 is zero in integer division, whereas 230 / 5 is 46.
You can force floating-point division by casting PurchasesRequired to a double. 230 / 500 is 0.46 in floating-point division, as you'd expect.
const int ProgressBarLength = 230;
foreach (TransactionDetail item in list)
{
var itemProgress = ((ProgressBarLength / (double)item.PurchasesRequired)
* Convert.ToInt32(item.TransactionAmount));
item.ProgressBar = Math.Min((int)itemProgress, ProgressBarLength);
}
I'm guessing TransactionDetail.PurchasesRequired is a field or property of type int.
ProgressBarLength / item.PurchasesRequired divides an int by an int, which results in an int, not a float. In your first example, 230 / 500 does integer division and the result is, of course, 0.
You can either calculate the expression as a double, or do the multiplication first so that you don't lose relevant precision from the integer division.
var itemProgress = (double) ProgressBarLength / item.PurchasesRequired * item.TransactionAmount;
or
var itemProgress = ProgressBarLength * (int)item.TransactionAmount / item.PurchasesRequired;
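For the question's failing case (ProgressBarLength = 230, PurchasesRequired = 500, TransactionAmount = 199), both fixes give a sensible value where the all-integer version gave 0. A self-contained check with those numbers:

```csharp
using System;

const int ProgressBarLength = 230;
int purchasesRequired = 500;
double transactionAmount = 199.0;

// Original all-integer form: 230 / 500 truncates to 0, so everything is 0.
int allInt = (ProgressBarLength / purchasesRequired) * (int)transactionAmount;

// Fix 1: compute as double. 230.0 / 500 = 0.46; 0.46 * 199 ≈ 91.54.
double viaDouble = (double)ProgressBarLength / purchasesRequired * transactionAmount;

// Fix 2: multiply first, then divide. 230 * 199 = 45770; 45770 / 500 = 91.
int multiplyFirst = ProgressBarLength * (int)transactionAmount / purchasesRequired;

Console.WriteLine($"{allInt} {multiplyFirst}"); // "0 91"
```

Fix 1 keeps fractional precision until the final cast; fix 2 stays in integers but must not overflow the intermediate product.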