Time complexity analysis for a recursive algorithm - C#

Any idea what the time complexity could be for this recursive algorithm please?
/* The function f(x) is unimodal over the range [min, max] and can be evaluated in Θ(1) time
* ε > 0 is the precision of the algorithm, typically ε is very small
* max > min
* n = (max - min)/ε, n > 0, where n is the problem size */
Algorithm(max, min){
    if ((max - min) < ε){
        return (min + max)/2 // return the answer: the midpoint of the final interval
    }
    leftThird = (2 * min + max) / 3 // the point 1/3 of the way from min to max
    rightThird = (min + 2 * max) / 3 // the point 2/3 of the way from min to max
    if (f(leftThird) < f(rightThird))
    {
        return Algorithm(max, leftThird) // look for the answer in the interval between leftThird and max
    }
    else
    {
        return Algorithm(min, rightThird) // look for the answer in the interval between min and rightThird
    }
}
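Each recursive call shrinks the interval to 2/3 of its length and does Θ(1) work besides the recursion, so the recurrence is T(n) = T(2n/3) + Θ(1), which solves to Θ(log n) (base 3/2). A minimal Python sketch of the same ternary search (the function name and sample f are mine, for illustration) that counts recursion depth:

```python
def ternary_search_max(f, lo, hi, eps, depth=0):
    """Locate the maximum of a unimodal f on [lo, hi]; returns (x, depth)."""
    if hi - lo < eps:
        return (lo + hi) / 2, depth      # midpoint of the final tiny interval
    left = (2 * lo + hi) / 3             # 1/3 of the way from lo to hi
    right = (lo + 2 * hi) / 3            # 2/3 of the way from lo to hi
    if f(left) < f(right):
        return ternary_search_max(f, left, hi, eps, depth + 1)
    return ternary_search_max(f, lo, right, eps, depth + 1)

x, depth = ternary_search_max(lambda t: -(t - 3) ** 2, 0.0, 10.0, 1e-6)
# depth is about log(n) / log(1.5) with n = (hi - lo) / eps, i.e. ~40 here
```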

Related

How to divide a decimal number into rounded parts that add up to the original number?

All decimal numbers are rounded to 2 digits when saved into the application. I'm given a number totalAmount and asked to divide it into n equal parts (or as close to equal as possible).
Example :
Given : totalAmount = 421.9720; count = 2 (totalAmount saved into application is 421.97)
Expected : 210.99, 210.98 => sum = 421.97
Actual (with plain divide): 210.9860 (210.99), 210.9860 (210.99) => sum = 421.98
My approach :
var totalAmount = 421.972m;
var count = 2;
var individualCharge = Math.Floor(totalAmount / count);
var leftOverAmount = totalAmount - (individualCharge * count);
for (var i = 0; i < count; i++) {
    Console.WriteLine(individualCharge + leftOverAmount);
    leftOverAmount = 0;
}
This gives (211.972, 210)
public IEnumerable<decimal> GetDividedAmounts(decimal amount, int count)
{
    var pennies = (int)(amount * 100) % count;
    var baseAmount = Math.Floor((amount / count) * 100) / 100;
    foreach (var _ in Enumerable.Range(1, count))
    {
        var offset = pennies-- > 0 ? 0.01m : 0m;
        yield return baseAmount + offset;
    }
}
Feel free to alter this if you want to get an array or an IEnumerable which is not deferred. I updated it to get the baseAmount to be the floor value so it isn't recalculated within the loop.
Basically you need to find the base amount and a total of all the leftover pennies. Then, simply add the pennies back one by one until you run out. Because the pennies are based on the modulus operator, they'll always be in the range of [0, count - 1], so you'll never have a final leftover penny.
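That penny-spreading scheme is easy to sketch in Python with integer cents (the function name and cent representation are mine, not from the answer above):

```python
def divide_amounts(amount_cents, count):
    """Split amount_cents into `count` parts differing by at most one cent."""
    base, pennies = divmod(amount_cents, count)
    # the first `pennies` parts each absorb one leftover cent
    return [base + 1 if i < pennies else base for i in range(count)]

parts = divide_amounts(42197, 2)  # 421.97 split two ways
# parts == [21099, 21098], i.e. 210.99 and 210.98
```

Because `pennies` is a remainder modulo `count`, it is always in [0, count - 1], so every leftover cent finds a home.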
You're introducing a few rounding errors here, then compounding them. This is a common problem with financial data, especially when you have to constrain your algorithm to only produce outputs with 2 decimal places. It's worse when dealing with actual money in countries where 1 cent/penny/whatever coins are no longer legal tender. At least when working with electronic money the rounding isn't as big an issue.
The naive approach of dividing the total by the count and rounding the results is, as you've already discovered, not going to work. What you need is some way to spread out the errors while varying the output amounts by no more than $0.01. No output value can be more than $0.01 from any other output value, and the total must be the truncated total value.
What you need is a way to distribute the error across the output values, with the smallest possible variation between the values in the result. The trick is to track your error and adjust the output down once the error is high enough. (This is basically how the Bresenham line-drawing algorithm figures out when to increase the y value, if that helps.)
Here's the generalized form, which is pretty quick:
public IEnumerable<decimal> RoundedDivide(decimal amount, int count)
{
    int totalCents = (int)Math.Floor(100 * amount);
    // work out the true division, integer portion and error values
    float div = totalCents / (float)count;
    int portion = (int)Math.Floor(div);
    float stepError = div - portion;
    float error = 0;
    for (int i = 0; i < count; i++)
    {
        int value = portion;
        // add in the step error and see if we need to add 1 to the output
        error += stepError;
        if (error > 0.5)
        {
            value++;
            error -= 1;
        }
        // convert back to dollars and cents for output
        yield return value / 100M;
    }
}
I've tested it with count values from 1 through 100, all outputs sum to match the (floored) input value exactly.
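For anyone who wants to experiment with the error-accumulation idea, here is a rough Python port of the same loop (a sketch, not the original C#):

```python
def rounded_divide(amount_cents, count):
    """Yield `count` integer-cent values summing to amount_cents,
    spreading the rounding error Bresenham-style."""
    div = amount_cents / count
    portion = int(div)           # integer part of the true share
    step_error = div - portion   # fractional error added at each step
    error = 0.0
    for _ in range(count):
        value = portion
        error += step_error
        if error > 0.5:          # enough error accumulated: bump this output
            value += 1
            error -= 1.0
        yield value

parts = list(rounded_divide(42197, 2))
# sums back to exactly 42197 cents
```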
Try breaking it down into steps:
int decimals = 2;
int factor = (int)Math.Pow(10, decimals);
int count = 2;
decimal totalAmount = 421.97232m;
totalAmount = Math.Floor(totalAmount * factor) / factor; // 421.97; you may want Round here, depending on your requirements
int baseAmount = (int)(totalAmount * factor / count); // 42197 / 2 = 21098
int left = (int)(totalAmount * factor) % count; // 1
// Add the remainder from the Mod operation back, one unit at a time
for (int i = 0; i < left; i++)
{
    Console.WriteLine((decimal)(baseAmount + 1) / factor); // (21098 + 1) / 100 = 210.99
}
// The rest need no adjustment
for (int i = 0; i < count - left; i++)
{
    Console.WriteLine((decimal)baseAmount / factor); // 21098 / 100 = 210.98
}

Calculate the ticks of an axis for a chart with a stepsize

I've calculated a step size for an axis on a chart.
I also have the Min and Max values. Now I need to calculate all the ticks so that every value between Min and Max can be displayed.
For Example:
Stepsize: 1000
Min: 213
Max: 4405
Expected ticks: 0,1000,2000,3000,4000,5000
Stepsize: 500
Min: -1213
Max: 1405
Expected ticks: -1500,-1000,-500,0,500,1000,1500
Until now I've been calculating the first value by trial and error, like this:
bool firstStepSet = false;
double firstStep = stepSize;
do
{
    if (xValue >= (firstStep - (stepSize / 2)) && xValue <= (firstStep + (stepSize / 2)))
    {
        firstStepSet = true;
        this.myBarXValues.Add(firstStep, 0);
    }
    else if (xValue > stepSize)
    {
        firstStep += stepSize;
    }
    else
    {
        firstStep -= stepSize;
    }
}
while (!firstStepSet);
After that I add steps to this list until all values fit.
This seems pretty dirty to me, and I want to know if there is another solution.
So what I need is a way to calculate the first tick.
This function calculates first and last step values:
static void CalcSteps(int min, int max, int stepSize, out int firstStep, out int lastStep)
{
    if (min >= 0)
    {
        firstStep = (min / stepSize) * stepSize;
    }
    else
    {
        firstStep = ((min - stepSize + 1) / stepSize) * stepSize;
    }
    if (max >= 0)
    {
        lastStep = ((max + stepSize - 1) / stepSize) * stepSize;
    }
    else
    {
        lastStep = (max / stepSize) * stepSize;
    }
}
You can calculate axis limits using integer rounding to lower and higher values
low = stepsize * (min / stepsize) //integer division needed
high = stepsize * ((max + stepsize - 1) / stepsize)
Example Python code returning the limits and the number of ticks (one more than the interval count):
def getminmax(minn, maxx, step):
    low = minn // step
    high = (maxx + step - 1) // step
    ticks = high - low + 1
    return low * step, high * step, ticks
print(getminmax(213, 4405, 1000))
print(getminmax(-1213,1405, 500))
(0, 5000, 6)
(-1500, 1500, 7)
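Building on the same floor/ceiling rounding, the full tick list falls out directly. A Python sketch (the function name is mine; `//` is floor division, which handles negative values correctly):

```python
def ticks(min_v, max_v, step):
    """All multiples of `step` covering [min_v, max_v]."""
    low = (min_v // step) * step        # round down to a multiple of step
    high = -((-max_v) // step) * step   # round up to a multiple of step
    return list(range(low, high + step, step))

ticks(213, 4405, 1000)   # [0, 1000, 2000, 3000, 4000, 5000]
ticks(-1213, 1405, 500)  # [-1500, -1000, -500, 0, 500, 1000, 1500]
```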

Angle Normalization C#

I have an Angle class that has this constructor
public Angle(int deg, // Degrees, minutes, seconds
             int min, // (Signs should agree
             int sec) // for conventional notation.)
{
    /* //Bug: degree normalization
    while (deg <= -180) deg += 360;
    while (deg > 180) deg -= 360;
    //correction end */
    double seconds = sec + 60 * (min + 60 * deg);
    value = seconds * Math.PI / 648000.0;
    normalize();
}
and I have these values for testing that constructor
int[] degrees = { 0, 180, -180, Int32.MinValue / 60, 120 + 180 * 200000 };
int[] minutes = { 0, 0, 0, 0, 56 };
int[] seconds = { 0, 0, 0, 0, 10 };
Console.WriteLine("Testing constructor Angle(int deg, int min, int sec)");
for (int i = 0; i < degrees.Length; i++)
{
    p = new Angle(degrees[i], minutes[i], seconds[i]);
    Console.WriteLine("p = " + p);
}
/* Testing constructor Angle(int deg, int min, int sec)
p = 0°0'0"
p = 180°0'0"
p = 180°0'0"
p = 0°8'0"      incorrect output
p = -73°11'50"  incorrect output, expected 120°56'10"
*/
I do not understand why there is a bug here, or why they divided Int32.MinValue by 60 and wrote 120 + 180 * 200000 in that form.
The commented-out block in the constructor is an attempted correction of the code.
UPDATE: Added the code of normalize()
// For compatibility with the math libraries and other software
// range is (-pi,pi] not [0,2pi), enforced by the following function:
void normalize()
{
    double twoPi = Math.PI + Math.PI;
    while (value <= -Math.PI) value += twoPi;
    while (value > Math.PI) value -= twoPi;
}
The problem is in this piece of code:
double seconds = sec + 60 * (min + 60 * deg);
Although you are storing seconds as a double, the conversion from int to double is taking place after sec + 60 * (min + 60 * deg) is computed as an int.
The compiler will not choose double arithmetic for you based on the type you store the result in. It chooses the best operator overload based on the types of the operands, which in this case are all int, and only afterwards looks for a valid implicit conversion (here, int to double). It therefore uses int arithmetic, and the operations overflow in the last two test cases:
(Int32.MinValue / 60) * 60 * 60 ≈ Int32.MinValue * 60 < Int32.MinValue, which will overflow.
(120 + 180 * 200000) * 60 * 60 > Int32.MaxValue, which will also overflow.
Your expected results for these two cases are probably not considering this behavior.
In order to solve this issue, change your code to:
double seconds = sec + 60 * (min + 60d * deg);
Explicitly typing the literal as a double (60d) forces the compiler to resolve all the operations with double arithmetic. (Note that in C# the f suffix denotes float, not double.)
Also, it is worth pointing out that your constructor logic has some other issues:
You should be validating the input data; should it be valid to specify negative minutes or seconds? IMO that doesn't seem reasonable. Only deg should be allowed to have a negative value. You should check for this condition and act accordingly: throw an exception (preferable) or normalize sign of min and sec based on the sign of deg (ugly and potentially confusing).
Your seconds calculation doesn't seem to be correct for negative angles (again, this is tied to the previous issue and whatever sign convention you have decided to implement). Unless the convention is that negative angles must have negative deg, min and sec, the way you are computing seconds is wrong because you are always adding the minutes and seconds terms no matter the sign of deg.
UPDATE There is one more issue in your code that I missed until I had the chance to test it. Some of your test cases are failing because double doesn't have enough resolution. I think your code needs some major refactoring; normalize() should be called first. This way you will always be managing tightly bounded values that can not cause overflows or precision loss.
This is the way I would do it:
public Angle(int deg, int min, int sec)
{
    // Omitting input value checks.
    double seconds = sec + 60 * (min + 60 * normalize(deg));
    value = seconds * Math.PI / 648000.0;
}
private int normalize(int deg)
{
    int normalizedDeg = deg % 360;
    if (normalizedDeg <= -180)
        normalizedDeg += 360;
    else if (normalizedDeg > 180)
        normalizedDeg -= 360;
    return normalizedDeg;
}
// For compatibility with the math libraries and other software
// range is (-pi,pi] not [0,2pi), enforced by the following function:
void normalize()
{
    double twoPi = Math.PI + Math.PI;
    while (value <= -Math.PI) value += twoPi;
    while (value > Math.PI) value -= twoPi;
}
This is the normalize function that I have
Normalizing with while loops is generally a bad idea. It's okay for small values, but imagine an angle like 1e+25: that would be about 1.59e+24 iterations, or roughly 100 million years of computation on a decent CPU.
How it should be done instead:
static double NormalizeDegree360(double value)
{
    var result = value % 360.0;
    return result > 0 ? result : result + 360;
}
static double NormalizeDegree180(double value)
{
    return (((value + 180) % 360) + 360) % 360 - 180;
}
static double TwoPI = 2 * System.Math.PI;
static double NormalizeRadians2Pi(double value)
{
    var result = value % TwoPI;
    return result > 0 ? result : result + TwoPI;
}
static double NormalizeRadiansPi(double value)
{
    return (((value + System.Math.PI) % TwoPI) + TwoPI) % TwoPI - System.Math.PI;
}
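The constant-time trick translates directly to Python for a quick check (the function name is mine; note Python's % already returns a result with the sign of the divisor, so one modulo suffices, and this maps into [-180, 180) rather than the constructor's (-180, 180] convention):

```python
def normalize_deg(value):
    """Map any angle in degrees into [-180, 180) in O(1)."""
    return (value + 180.0) % 360.0 - 180.0

normalize_deg(190)   # -170.0
normalize_deg(-190)  # 170.0
normalize_deg(1e25)  # returns instantly; no million-year while loop
```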
They're using the very large negative and positive numbers to make sure that the normalization caps angles to the range [-180, 180] degrees properly.

Generating a random number that isn't biased against positive proper fractions

I apologize for the simplicity of this question, it's just past 3AM and I can't think :)
I need to get a random number n between 0.25 and 10.0, however I need P( 0.25 <= n < 1.0 ) == P( 1.0 < n <= 10.0 ) && n != 1.0.
Right now my current code is biased towards 1.0 <= n <= 10.0 like so:
Double n = new Random().NextDouble(); // 0 <= n <= 1.0
n = 0.25 + (10.0 * n);
Of course this also has a bug where n == 10.25 if n = 1.0 initially.
Ta!
If I understand correctly, you want this:
var random = new Random();
Double n1 = random.NextDouble(); // 0 <= n1 < 1.0
Double n = random.NextDouble();  // 0 <= n < 1.0
if (n1 < 0.5)
    n = 0.25 + 0.75 * n; // 0.25 <= n < 1.0
else
    n = 10.0 - 9.0 * n;  // 1.0 < n <= 10.0
This function should work:
double GetRandomValue(Random rand)
{
    return rand.Next(0, 2) == 0
        ? 0.25 + 0.75 * rand.NextDouble()
        : 10.0 - 9.0 * rand.NextDouble();
}
First random value selects whether you should use values below or above 1. For values above 1, you wanted to include 10.0 in the range, hence the subtraction.
I think this is it.
Double n = new Random().NextDouble(); // 0 <= n < 1.0
n = n < 0.5 ? 0.25 + (0.75 * n * 2) : 10.0 - (9.0 * (n - 0.5) * 2);
The lower branch maps [0, 0.5) onto [0.25, 1.0), and the upper branch maps [0.5, 1.0) onto (1.0, 10.0], so n == 1.0 can never occur, since NextDouble() produces 0 <= n < 1.0.
Double n = new Random().NextDouble(); // 0 <= n < 1.0
Double extremelySmallValue = 0.00000001;
n *= 9.75; // 0 <= n < 9.75
n += 0.25; // 0.25 <= n < 10.0
// a few ifs now:
if (n == 10.0)
    n -= extremelySmallValue;
else if (n == 1.0)
    n += extremelySmallValue;
It's not a perfectly uniform distribution, but I think it's acceptable, because Random does not provide a perfectly uniform distribution either. You can also draw another NextDouble() when you get 1.0 or 10.0.
You don't say what distribution you want within each range, however, assuming you want uniformity, just linearly map the bottom half of NextDouble's range to your lower range, and the top half to the top range.
However, there's a little thought required with getting the half-openness of your target intervals right. NextDouble returns a double in [0, 1) (that is, lower-inclusive and upper-exclusive). We can split this in half to [0, 0.5) and [0.5, 1) but then since your required upper range is (1, 10], we should flip the upper range during the transform.
var n = myRandom.NextDouble();
if (n < 0.5)
{
    // Map from [0, 0.5) to [0.25, 1.0)
    return 0.25 + (n * 1.5);
}
else
{
    // Map from [0.5, 1.0) to (1.0, 10.0] by reversing the input range
    var flipped = 1.0 - n;
    // Now flipped is in (0, 0.5]
    // So scale up by 18 to get a value in (0, 9], then shift
    return 1 + (flipped * 18);
}
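The split-and-flip mapping is easy to check empirically; here is a Python sketch mirroring it (names are mine):

```python
import random

def sample(rng):
    """n with P(0.25 <= n < 1) == P(1 < n <= 10), never exactly 1."""
    u = rng.random()               # u in [0, 1)
    if u < 0.5:
        return 0.25 + u * 1.5      # [0, 0.5)  -> [0.25, 1.0)
    return 1.0 + (1.0 - u) * 18.0  # [0.5, 1.0) -> (1.0, 10.0], flipped

rng = random.Random(42)
draws = [sample(rng) for _ in range(100_000)]
low = sum(d < 1.0 for d in draws)  # should be close to 50_000
```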

Rounding integers to nearest multiple of 10 [duplicate]

This question already has answers here:
Returning the nearest multiple value of a number
(6 answers)
Closed 3 years ago.
I am trying to figure out how to round prices - both ways. For example:
Round down
43 becomes 40
143 becomes 140
1433 becomes 1430
Round up
43 becomes 50
143 becomes 150
1433 becomes 1440
I have the situation where I have a price range of say:
£143 - £193
of which I want to show as:
£140 - £200
as it looks a lot cleaner
Any ideas on how I can achieve this?
I would just create a couple of methods:
int RoundUp(int toRound)
{
    if (toRound % 10 == 0) return toRound;
    return (10 - toRound % 10) + toRound;
}
int RoundDown(int toRound)
{
    return toRound - toRound % 10;
}
Modulus gives us the remainder r; to round up, adding 10 - r takes you to the next multiple of ten, and to round down you just subtract r. Pretty straightforward.
You don't need to use modulus (%) or floating point...
This works:
public static int RoundUp(int value)
{
    return 10 * ((value + 9) / 10);
}
public static int RoundDown(int value)
{
    return 10 * (value / 10);
}
This code rounds to the nearest multiple of 10:
int RoundNum(int num)
{
    int rem = num % 10;
    return rem >= 5 ? (num - rem + 10) : (num - rem);
}
Very simple usage :
Console.WriteLine(RoundNum(143)); // prints 140
Console.WriteLine(RoundNum(193)); // prints 190
A general method to round a number to a multiple of another number, rounding away from zero.
For integer
int RoundNum(int num, int step)
{
    if (num >= 0)
        return ((num + (step / 2)) / step) * step;
    else
        return ((num - (step / 2)) / step) * step;
}
For float
float RoundNum(float num, float step)
{
    if (num >= 0)
        return (float)Math.Floor((num + step / 2) / step) * step;
    else
        return (float)Math.Ceiling((num - step / 2) / step) * step;
}
I know some parts might seem counter-intuitive or not very optimized. I tried casting (num + step / 2) to an int, but this gave wrong results for negative floats, because the cast truncates toward zero ((int)-11.9999 is -11, not -12). Anyway, these are a few cases I tested:
any number rounded to step 1 should be itself
-3 rounded to step 2 = -4
-2 rounded to step 2 = -2
3 rounded to step 2 = 4
2 rounded to step 2 = 2
-2.3 rounded to step 0.2 = -2.4
-2.4 rounded to step 0.2 = -2.4
2.3 rounded to step 0.2 = 2.4
2.4 rounded to step 0.2 = 2.4
Divide the number by 10.
number = number / 10;
number = Math.Ceiling(number); // round up
number = Math.Floor(number);   // round down
Then multiply by 10.
number = number * 10;
public static int Round(int n)
{
    // Smaller multiple
    int a = (n / 10) * 10;
    // Larger multiple
    int b = a + 10;
    // Return the closer of the two
    return (n - a > b - n) ? b : a;
}
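All of the answers above reduce to variations on floor division; here is a compact Python sketch of the three operations (function names are mine; note Python's // floors toward negative infinity, unlike C#'s integer division, which truncates toward zero):

```python
def round_down(n, step=10):
    return (n // step) * step                # e.g. 143 -> 140

def round_up(n, step=10):
    return -((-n) // step) * step            # e.g. 143 -> 150

def round_nearest(n, step=10):
    return ((n + step // 2) // step) * step  # e.g. 143 -> 140, 147 -> 150
```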
