Need to compute correct output [closed] - c#

I am coding in C#. This code is in Main():
double rate = 0.10;
double surge = 0.25;
int phoneBill = 75;
double totalAmount = phoneBill + rate + surge;
WriteLine("New Phone Bill");
WriteLine("Your new Phone Bill is $" + totalAmount);
ReadKey();
I am trying to calculate the correct phone bill charge by adding rate and surge to phoneBill, as follows:
phoneBill + rate + surge should output 103.13, but for some reason the output I get is 75.35. How can I fix the output so it produces 103.13?
I tried addition, multiplication, and division symbols within double totalAmount = phoneBill + rate + surge; I also tried phoneBill + (rate + surge) and (phoneBill + rate) + surge with multiplication, addition, and division symbols, and it still didn't compute 103.13. How do I fix this error?

If you're trying to add a 25% surcharge, then you should multiply by 125%, not 25%. Multiplying by 25% is actually a 75% discount.
Change rate to 1.1 (representing 110%) and surge to 1.25 (representing 125%) and multiply. The result is 103.125, so once you round to two decimal places you'll get the right answer.
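In C#, the corrected calculation might look like this (a sketch using the question's variable names; the Math.Round call is my addition for the two-decimal result):

```csharp
double rate = 1.10;   // 110%: the original bill plus the 10% rate
double surge = 1.25;  // 125%: the original bill plus the 25% surcharge
int phoneBill = 75;

// Multiply by the percentages instead of adding them to the dollar amount
double totalAmount = phoneBill * rate * surge;  // 103.125

// 103.125 is a midpoint, and Math.Round defaults to banker's rounding
// (to even), so round away from zero to get 103.13
double rounded = Math.Round(totalAmount, 2, MidpointRounding.AwayFromZero);

Console.WriteLine("Your new Phone Bill is $" + rounded);  // $103.13
```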

Related

how can i round up nearest 10 using rdlc report c# [closed]

=Round(56,10)
// I want it to be 60
I'm truthfully not terribly familiar with RDLC Reports, but in "pure" C# you can just divide by 10.0, round up, and multiply by 10:
Math.Ceiling(56 / 10.0) * 10
because 56 / 10.0 = 5.6, Math.Ceiling(5.6) = 6, and 6 * 10 = 60.
Note that it's actually important that you divide by 10.0 (rather than 10) so that the compiler "knows" that you're doing floating-point division (rather than integer division).
Hopefully this'll get you started in the right direction.
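Wrapped up as a small helper method (a sketch; the method name is my own):

```csharp
static int RoundUpToNearestTen(int value)
{
    // Divide by 10.0 to force floating-point division, round up, scale back
    return (int)(Math.Ceiling(value / 10.0) * 10);
}

// RoundUpToNearestTen(56) → 60
// RoundUpToNearestTen(60) → 60 (already a multiple of 10)
```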

Determine total number of perimeter dots based on rectangle's length & width dots [C#] [closed]

Hello, awesome community. I'm new to this platform; nonetheless, I'll try to be detailed yet brief with my request.
The above image shows a rectangle with a length of 5 dots and a width of 4 dots. Is there a way to get the total number of dots used to cover the perimeter of the rectangle in C#? One can visually see that there are 14 dots in total, but I want the program to determine those 14 dots from the 5-dot length and the 4-dot width.
Using C# WinForms, I've placed a textbox for length and another for width, plus a result textbox to show the total number of dots. No graphical work or anything fancy, just calculation.
Thanks in advance :)
That is a super easy math question.
It is simply 2 * (first side length) + 2 * (second side length) - 4 (the four corners).
In your case, 2 * 5 + 2 * 4 - 4 = 14.
Let's use W for width and H for height. Then:
the total area is:
W x H = 20
the inner area is:
(W - 2) x (H - 2) = 6
so just deduct inner from total area:
(W x H) - ((W - 2) x (H - 2)) = 14
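Both formulas give the same count; as a sketch in C# (method names are mine, and both formulas assume each side has at least 2 dots):

```csharp
static int PerimeterDots(int lengthDots, int widthDots)
{
    // 2*L + 2*W counts every edge dot but counts each corner dot twice,
    // so subtract the 4 corners
    return 2 * lengthDots + 2 * widthDots - 4;
}

static int PerimeterDotsByArea(int lengthDots, int widthDots)
{
    // Total dots minus the inner (non-perimeter) dots
    return lengthDots * widthDots - (lengthDots - 2) * (widthDots - 2);
}

// Both return 14 for a 5 x 4 rectangle of dots.
```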

What relevance is 1.307 to the series 1 + 1/2 + 1/3 + 1/4... + 1/n [closed]

I'm currently working my way through the Nakov book, Fundamentals of Computer Programming in C#. In Chapter 4 question 12 states:
Write a program that calculates the sum (with the precision of 0.001) of the following sequence: 1 + 1/2 - 1/3 + 1/4 - 1/5 + …
It seemed to me to be a relatively straightforward question. The series is a diminishing fraction that does not have an asymptote. Stopping the loop at a certain point due to diminished changes in value meets the precision requirements AFAIC. However, the solution given in both the Hungarian and English versions of the book makes reference to an obscure (to me) value of 1.307. As follows:
Accumulate the sum of the sequence in a variable inside a while-loop (see the chapter "Loops"). At each step compare the old sum with the new sum. If the difference between the two sums Math.Abs(current_sum – old_sum) is less than the required precision (0.001), the calculation should finish because the difference is constantly decreasing and the precision is constantly increasing at each step of the loop. The expected result is 1.307.
Can someone explain what this might mean?
Note that the header mentions the harmonic sequence, which has no limit.
But the question body shows an alternating-sign sequence that converges to the value 2 - ln(2).
The expected result is 1.307.
I think they are simply saying what the result of the calculation is, so you can check your answer.
The sequence you've got
1 + 1/2 - 1/3 + 1/4 - ...
is the same as the Alternating Harmonic Series on Wikipedia, except with the signs from 1/2 onwards flipped:
1 - 1/2 + 1/3 - 1/4 + ... = ln 2
and the natural logarithm of 2, ln 2, = 0.693. Hence your 1.307 here = 2 - ln 2.
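Following the book's hint, a loop might look like this (a sketch with my own variable names, not the book's solution):

```csharp
double sum = 1.0;   // first term of the series
double previous;
int n = 2;
int sign = 1;       // 1 + 1/2 - 1/3 + 1/4 - 1/5 + ...

do
{
    previous = sum;
    sum += sign * 1.0 / n;  // add the next term
    sign = -sign;           // flip the sign for the following term
    n++;
} while (Math.Abs(sum - previous) >= 0.001);  // stop once terms fall below the precision

// ≈ 1.306..., within the required 0.001 of 2 - ln 2 ≈ 1.30685
Console.WriteLine(sum);
```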

How to do Division in a Fractional Calculator? [closed]

I'm completely unsure how to do it. I've searched but can't find a simple answer.
I've done the multiplication, and I know division is similar to it. I need some help: I want to know how to do division for two fractions.
My Multiply module:
{
answerDenominator = num1Denominator * num2Denominator; //Multiply both denominators
answerNumerator = ((num1Whole * num1Denominator) + num1Numerator) * //multiply the first whole number by its denominator and add its numerator
((num2Whole * num2Denominator) + num2Numerator); //do the same for the second mixed number (num2Whole, not num1Whole), then multiply the two together
answerWhole = answerNumerator / answerDenominator;
answerNumerator = answerNumerator % answerDenominator;
}
Suppose we have to perform the following division:
(a/b) : (c/d)
This is equal to:
(a/b) * (d/c)
That being said, the division can simply be done as below:
static double CalculateDivisionResult(double a, double b, double c, double d)
{
return (a/b)*(d/c);
}
In the above:
a is the num1Numerator.
b is the num1Denominator.
c is the num2Numerator.
d is the num2Denominator.
The most important thing to pay attention to above is the fact that we use double. Why do we do so?
Suppose that a=3, b=7, c=4 and d=5. Then:
(a/b) * (d/c) = 15/28
If you had chosen to represent your numbers as integers, int a = 3, then the above would obviously be 0, because a/b would be performed as integer division. Representing them as doubles overcomes this.
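If instead you want to keep the integer whole/numerator/denominator fields used by the multiply module, you can invert the second fraction and multiply. A sketch (the method name and out-parameters are my own, mirroring the question's variable names):

```csharp
// Divide two mixed numbers (whole + numerator/denominator),
// following the same structure as the multiply module.
static void DivideFractions(
    int num1Whole, int num1Numerator, int num1Denominator,
    int num2Whole, int num2Numerator, int num2Denominator,
    out int answerWhole, out int answerNumerator, out int answerDenominator)
{
    // Convert both mixed numbers to improper fractions
    int improper1 = num1Whole * num1Denominator + num1Numerator; // a
    int improper2 = num2Whole * num2Denominator + num2Numerator; // c

    // (a/b) / (c/d) = (a/b) * (d/c)
    answerNumerator = improper1 * num2Denominator;
    answerDenominator = num1Denominator * improper2;

    // Pull the whole part back out, as the multiply module does
    answerWhole = answerNumerator / answerDenominator;
    answerNumerator %= answerDenominator;
}

// e.g. 1 1/2 divided by 3/4: (3/2)*(4/3) = 12/6 → whole 2, remainder 0/6
```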

why c# doesn't show the changes in variable? [closed]

I tried this code with these values:
float timeStamp;
a = 1338526801
b = 113678
timeStamp = a + (b / 1000000);
Then I changed b to 113680 and calculated the timeStamp again:
timeStamp = a + (b / 1000000);
The timeStamp really should change because b has changed, but when I print it with Console.WriteLine(), the timeStamp value doesn't change. I think it is related to the precision of C# values, but I don't know how to resolve it.
You should take a look at the Floating-Point Types Table (C# Reference), which gives the following info:
Type    Approximate range         Precision
float   ±1.5e−45 to ±3.4e38       7 digits
double  ±5.0e−324 to ±1.7e308     15-16 digits
Your combination of 338526801 + 113678/1000000 is about 15-16 significant digits and would fit better into a double.
A float, which holds about 7 significant digits, only gets you accuracy to 338526800.000000 and no more:
float f = 338526801 + 113678f/1000000;
System.Diagnostics.Debug.Print(f.ToString("F6")); // results in 338526800.000000
However, a double, with its 15-16 digits, can actually store the data to your precision:
double d = 338526801d + 113678d/1000000;
System.Diagnostics.Debug.Print(d.ToString("F6")); // results in 338526801.113678
You could also look at TimeSpan and DateTime, which give you accuracy in 100-nanosecond units (ticks). Since there are 10 ticks in a microsecond (µs), the same value as a TimeSpan would be:
TimeSpan time = new TimeSpan(3385268011136780);
One of the comments suggested you might be trying to convert Unix time. If so, you can add the TimeSpan to the DateTime representing 1/1/1970.
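For Unix time specifically, a sketch (assuming the values are seconds plus microseconds, as the question's numbers suggest):

```csharp
long seconds = 1338526801;   // Unix seconds, from the question
long microseconds = 113678;

// 1 tick = 100 ns, so 1 microsecond = 10 ticks
DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
DateTime timestamp = epoch.AddTicks(seconds * TimeSpan.TicksPerSecond
                                    + microseconds * 10);

// Ticks preserve the full microsecond precision that float discards
Console.WriteLine(timestamp.ToString("o"));
```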
