Estimate cubic polynomial that maps set x to set y - c#

I have a (sampled) set of uncalibrated values (x) coming from a device and a set of what they should be (y). I'm looking to find/estimate the cubic polynomial y=ax^3 + bx^2 + cx + d that maps any x to y.
So I think what I need to do is Polynomial Regression first and then find its inverse, but I'm not so sure; and I wonder whether there is a better solution like least squares.
I would appreciate a nudge in the right direction and/or any links to a math library that would be of use.

Have you checked Lagrange interpolation?
It is about the polynomial approximation of a given function.
You can stop the approximation at a given degree of the polynomial (say, the 3rd degree) over a suitable range of the independent variable.
Refs:
http://en.wikipedia.org/wiki/Lagrange_polynomial
https://math.stackexchange.com/a/108623
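For reference, a minimal sketch of evaluating a Lagrange interpolating polynomial in C# (names are illustrative; with four sample points the result is the unique cubic through them):
// Evaluate the Lagrange polynomial through (xs[i], ys[i]) at the point x.
// With xs.Length == 4 this is exactly the cubic through the four samples.
static double LagrangeEval(double[] xs, double[] ys, double x)
{
    double sum = 0.0;
    for (int i = 0; i < xs.Length; i++)
    {
        double basis = 1.0;
        for (int j = 0; j < xs.Length; j++)
        {
            if (j != i)
                basis *= (x - xs[j]) / (xs[i] - xs[j]);
        }
        sum += ys[i] * basis;
    }
    return sum;
}
Note that interpolation passes exactly through the chosen samples; with noisy device readings, a least-squares fit (as in the answer below) is usually the better choice.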

Looks like it's just polynomial regression; I just need to feed in the raw values (x) and the expected values (y).
Code from Rosetta Code that uses Math.NET Numerics:
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Factorization;

public static class PolyRegression
{
    public static double[] Polyfit(double[] x, double[] y, int degree)
    {
        // Vandermonde matrix: row i is [1, x[i], x[i]^2, ..., x[i]^degree]
        var v = new DenseMatrix(x.Length, degree + 1);
        for (int i = 0; i < v.RowCount; i++)
            for (int j = 0; j <= degree; j++)
                v[i, j] = Math.Pow(x[i], j);
        var yv = new DenseVector(y).ToColumnMatrix();
        QR qr = v.QR();
        // Math.Net doesn't have an "economy" QR, so:
        // cut R short to square upper triangle, then recompute Q
        var r = qr.R.SubMatrix(0, degree + 1, 0, degree + 1);
        var q = v.Multiply(r.Inverse());
        var p = r.Inverse().Multiply(q.TransposeThisAndMultiply(yv));
        return p.Column(0).ToArray();
    }
}
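A minimal usage sketch (the sample values here are made up): Polyfit returns the coefficients in ascending order of power, so p[0] is d, p[1] is c, p[2] is b and p[3] is a in y = ax^3 + bx^2 + cx + d.
double[] x = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0 };   // raw device readings (hypothetical)
double[] y = { 0.8, 2.1, 3.2, 3.9, 5.3, 6.0 };   // calibrated target values (hypothetical)
double[] p = PolyRegression.Polyfit(x, y, 3);
// Evaluate the fitted cubic at a new reading
double xNew = 2.5;
double yEst = p[0] + p[1] * xNew + p[2] * xNew * xNew + p[3] * xNew * xNew * xNew;
Console.WriteLine(yEst);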

Related

Using FFTW in C# to compute HilbertTransform

I want to implement the Hilbert Transform in C#. From this article I saw that the fastest FFT open source implementation seems to be the FFTW, so I downloaded that example and used it to learn how to use the fftw wrapper for C#.
I have a current signal of 200,000 points which I'm using for testing. Getting the Hilbert transform through the fft is relatively simple:
Compute the fft.
Multiply by 2 all positive frequencies except for the DC and Nyquist components (0 and n/2 + 1, if the sample size is even).
Multiply by 0 all the negative frequencies ([n/2 + 1, n]).
Compute the inverse fft.
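(For reference, those four steps can be sketched with Math.NET's managed FFT; this is only an illustration of the bin weighting, not the FFTW code used below, and FourierOptions.Matlab is assumed to reproduce MATLAB's scaling convention.)
using System.Numerics;
using MathNet.Numerics.IntegralTransforms;

static Complex[] AnalyticSignal(double[] signal)
{
    int n = signal.Length;
    var bins = new Complex[n];
    for (int i = 0; i < n; i++) bins[i] = new Complex(signal[i], 0.0);
    Fourier.Forward(bins, FourierOptions.Matlab);           // step 1: FFT
    for (int k = 1; k < (n + 1) / 2; k++) bins[k] *= 2.0;   // step 2: double positive frequencies
    for (int k = n / 2 + 1; k < n; k++) bins[k] = 0.0;      // step 3: zero negative frequencies
    // DC (k = 0) and, for even n, the Nyquist bin (k = n/2) are left untouched.
    Fourier.Inverse(bins, FourierOptions.Matlab);           // step 4: inverse FFT
    return bins;                                            // imaginary part = Hilbert transform
}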
So far, I've done all of it. The only problem is the inverse fft: I'm not able to get the same results with fftw as with the ifft from Matlab.
My code
RealArray _input;
ComplexArray _fft;

void ComputeFFT()
{
    _fft = new ComplexArray(_length / 2 + 1);
    _input.Set(Data);
    _plan = Plan.Create1(_length, _input, _fft, Options.Estimate);
    _plan.Execute();
}
So far, I have an fft with only the positive frequencies, so I don't need to multiply the negative frequencies by zero: they don't even exist. With the following code, I can get my original signal back:
double[] ComputeIFFT(ComplexArray input)
{
    double[] temp = new double[_length];
    RealArray output = new RealArray(_length);
    _plan = Plan.Create1(_length, input, output, Options.Estimate);
    _plan.Execute();
    temp = output.ToArray();
    for (int i = 0; i < _length; ++i)
    {
        temp[i] /= _length;
    }
    return temp;
}
The problem comes when I try to get a complex inverse from the signal.
void ComputeHilbert()
{
    double[] fft = FFT.ToArray();
    double[] h = new double[_length / 2 + 1];
    double[] temp = new double[_length * 2];
    bool fftLengthIsOdd = (_length & 1) == 1;   // bitwise AND to test for an odd length

    h[0] = 1;
    for (int i = 1; i < _length / 2; i++) h[i] = 2;
    if (!fftLengthIsOdd) h[_length / 2] = 1;

    for (int i = 0; i <= _length / 2; i++)
    {
        temp[2 * i] = fft[2 * i] * h[i];
        temp[2 * i + 1] = fft[2 * i + 1] * h[i];
    }

    ComplexArray _tempHilbert = new ComplexArray(_length);
    _tempHilbert.Set(temp);
    _hilbert = ComputeIFFT(_tempHilbert);
    _hilbertComputed = true;
}
It's important to note that, when I apply the ToArray() method to a ComplexArray object, I get as a result a double[] with twice the length of the original array, with the real and imaginary parts stored consecutively. That is, for a ComplexArray object containing "3 + 1i", I would get the double vector [3, 1].
So, at this moment, what I have is something like:
[DC Frequency, 2*positive frequencies, Nyquist Frequency, zeros]
If I export this data to Matlab and compute the IFFT, I get the same result as its hilbert(signal).
However, if I try to apply the IFFT provided by fftw, I get weird values from Nyquist Frequency to the end (that is to say, the zeros mess with fftw).
This is the ifft I'm using to do this:
double[] ComputeIFFT(ComplexArray input)
{
    double[] temp;
    ComplexArray output = new ComplexArray(_length);
    _plan = Plan.Create1(_length, input, output, Direction.Backward, Options.Estimate);
    _plan.Execute();
    temp = output.ToArray();
    for (int i = 0; i < _length; ++i)
    {
        temp[i] /= _length;
    }
    return temp;
}
So, to sum it up, my problem is the way I'm calculating the ifft. It doesn't seem to work well with zeros. Or maybe Matlab understands that it has to apply some different approach and I should do it manually, but I don't know how.
Thank you very much for your help in advance, much appreciated!
So the problem was the ComputeIFFT function. In the for loop, I was doing i < _length, but the length of the temp array is 2 * _length, because it holds both real and imaginary values.
That's why I only got half of the values right.
The correct code for it is:
double[] ComputeIFFT(ComplexArray input)
{
    double[] temp;
    ComplexArray output = new ComplexArray(_length);
    _plan = Plan.Create1(_length, input, output, Direction.Backward, Options.Estimate);
    _plan.Execute();
    temp = output.ToArray();
    for (int i = 0; i < temp.Length; ++i)
    {
        temp[i] /= _length;
    }
    return temp;
}
I hope this will be useful for anyone trying to implement the Hilbert Transform through FFTW in C#.
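As a follow-up sketch (assuming the corrected ComputeIFFT above and the interleaved [re, im, re, im, ...] layout described in the question), the Hilbert transform itself is the imaginary part of the returned analytic signal:
double[] analytic = ComputeIFFT(_tempHilbert);  // length is 2 * _length (interleaved)
double[] hilbert = new double[_length];
for (int i = 0; i < _length; i++)
{
    hilbert[i] = analytic[2 * i + 1];           // odd slots hold the imaginary parts
}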

constrained optimization - to find optimal translation vector in Unity3d

I am new to optimization problems and have gone through several math libraries like Alglib, DotNumerics and Microsoft Solver Foundation, but I have had no luck getting started; perhaps some experts can shed some light.
I want to get the optimal translation from 3d points on a reference contour to a target contour.
Below is the constrained optimization problem. How do I optimize it if I wish to use DotNumerics, for instance? I have no idea how to kick-start it:
Pr : 3d points on reference contour
Pt : 3d points on target contour
t(Pr): translation vector of point Pr <-- this is what I'm looking for
Below is the example provided by DotNumerics; how should I take all my 3d points as input and get the translation vector out?
public void OptimizationLBFGSBConstrained()
{
    //This example minimizes the function
    //f(x0,x2,...,xn) = (x0-0)^2 + (x1-1)^2 + ... + (xn-n)^2
    //The minimum is at (0,1,2,3,...,n) for the unconstrained case.
    //using DotNumerics.Optimization;
    L_BFGS_B LBFGSB = new L_BFGS_B();
    int numVariables = 5;
    OptBoundVariable[] variables = new OptBoundVariable[numVariables];
    //Constrained minimization on the interval (-10,10), initial guess = -2
    for (int i = 0; i < numVariables; i++)
        variables[i] = new OptBoundVariable("x" + i.ToString(), -2, -10, 10);
    double[] minimum = LBFGSB.ComputeMin(ObjetiveFunction, Gradient, variables);
    ObjectDumper.Write("L-BFGS-B Method. Constrained Minimization on the interval (-10,10)");
    for (int i = 0; i < minimum.Length; i++)
        ObjectDumper.Write("x" + i.ToString() + " = " + minimum[i].ToString());
    //Constrained minimization on the interval (-10,3), initial guess = -2
    for (int i = 0; i < numVariables; i++)
        variables[i].UpperBound = 3;
    minimum = LBFGSB.ComputeMin(ObjetiveFunction, Gradient, variables);
    ObjectDumper.Write("L-BFGS-B Method. Constrained Minimization on the interval (-10,3)");
    for (int i = 0; i < minimum.Length; i++)
        ObjectDumper.Write("x" + i.ToString() + " = " + minimum[i].ToString());

    //f(x0,x2,...,xn) = (x0-0)^2 + (x1-1)^2 + ... + (xn-n)^2
    //private double ObjetiveFunction(double[] x)
    //{
    //    int numVariables = 5;
    //    double f = 0;
    //    for (int i = 0; i < numVariables; i++) f += Math.Pow(x[i] - i, 2);
    //    return f;
    //}

    //private double[] Gradient(double[] x)
    //{
    //    int numVariables = 5;
    //    double[] grad = new double[x.Length];
    //    for (int i = 0; i < numVariables; i++) grad[i] = 2 * (x[i] - i);
    //    return grad;
    //}
}
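Not a full answer, but a minimal sketch of how the translation-vector problem above could be wired into that example, assuming the reference and target points have already been matched pairwise (all names here are illustrative): minimize f(t) = sum_i ||(Pr_i + t) - Pt_i||^2 over t = (tx, ty, tz), whose gradient is 2 * sum_i ((Pr_i + t) - Pt_i).
using DotNumerics.Optimization;
using UnityEngine;

public class TranslationFit
{
    private Vector3[] _pr;   // sampled points on the reference contours (matched order)
    private Vector3[] _pt;   // corresponding points on the target contours

    public Vector3 Solve()
    {
        var lbfgsb = new L_BFGS_B();
        // Each component of t bounded to (-10, 10), initial guess 0 (bounds are arbitrary here)
        var variables = new OptBoundVariable[]
        {
            new OptBoundVariable("tx", 0, -10, 10),
            new OptBoundVariable("ty", 0, -10, 10),
            new OptBoundVariable("tz", 0, -10, 10)
        };
        double[] t = lbfgsb.ComputeMin(Objective, Gradient, variables);
        return new Vector3((float)t[0], (float)t[1], (float)t[2]);
    }

    private double Objective(double[] t)
    {
        // f(t) = sum of squared residuals between translated reference points and target points
        double f = 0;
        for (int i = 0; i < _pr.Length; i++)
        {
            Vector3 r = _pr[i] + new Vector3((float)t[0], (float)t[1], (float)t[2]) - _pt[i];
            f += r.x * r.x + r.y * r.y + r.z * r.z;
        }
        return f;
    }

    private double[] Gradient(double[] t)
    {
        // df/dt = 2 * sum of residuals, component-wise
        double[] g = new double[3];
        for (int i = 0; i < _pr.Length; i++)
        {
            Vector3 r = _pr[i] + new Vector3((float)t[0], (float)t[1], (float)t[2]) - _pt[i];
            g[0] += 2 * r.x;
            g[1] += 2 * r.y;
            g[2] += 2 * r.z;
        }
        return g;
    }
}
For a pure translation with matched pairs, the unconstrained minimizer is simply the mean of (Pt_i - Pr_i); a solver mainly becomes worthwhile once you add bounds or a richer transform.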
Edit 1:
To make things more concrete, here is the real problem I have been working on in Unity. I sampled 5 iso-contour lines from the reference model and did the same on the target model (different mesh and different vertex positions).
- On the first iso-contour of the reference model, I sampled 8 normalized points (equally split by distance), and did the same on the target model. That gives me a pair of corresponding point sets (the target model's normalized point positions will always change, since every user has a different body size).
- Next, I repeated the steps above to cover the rest of the iso-contours.
- Once that is done, I want to use the formula above to find the optimal translation vector, so that I can translate all vertices from the reference to the target model with one single translation vector (not sure this is possible).
Is this how optimization works?
Please ignore the red line in the yellow iso-contour.

go through matrix and get the lowest sum only going from left to right and from up to down only

I'm stuck on a college project and I wonder if you can give me a hint on how to do this; I have to do it in C#.
Using an 80x80 matrix, I have to go through it only from left to right and from up to down, so I can find the path that gives the lowest number when summing all the values from the top left corner to the bottom right corner.
As an example, in this case the numbers that should be picked are:
131, 201, 96, 342, 746, 422, 121, 37, 331 = 2427, the lowest sum.
It does not matter how many times you move to the right or down but what matters is to get the lowest number.
This is an interesting project in that it illustrates an important technique called dynamic programming: a solution to the entire problem can be constructed from a solution to a smaller sub-problem with a simple computation step.
Start with a recursive solution that wouldn't work for a large matrix:
// m is the matrix
// R (uppercase) is the number of rows; C is the number of columns
// r (lowercase) and c are starting row/column
int minSum(int[,] m, int R, int C, int r, int c) {
    int res;
    if (r == R-1 && c == C-1) {
        // Bottom-right corner - one answer
        res = m[r,c];
    } else if (r == R-1) {
        // Bottom row - go right
        res = m[r,c] + minSum(m, R, C, r, c+1);
    } else if (c == C-1) {
        // Rightmost column - go down
        res = m[r,c] + minSum(m, R, C, r+1, c);
    } else {
        // In the middle - try going right, then try going down
        int goRight = m[r,c] + minSum(m, R, C, r, c+1);
        int goDown = m[r,c] + minSum(m, R, C, r+1, c);
        res = Math.Min(goRight, goDown);
    }
    return res;
}
This will work for a 10×10 matrix, but it would take too long for an 80×80 matrix. However, it provides a template for a working solution: if you add a separate matrix of results you obtained at earlier steps, you transform it into a faster solution:
// m is the matrix
// R (uppercase) is the number of rows; C is the number of columns
// known is the matrix of solutions you already know
// r (lowercase) and c are starting row/column
int minSum(int[,] m, int R, int C, int?[,] known, int r, int c) {
    if (known[r,c].HasValue) {
        return known[r,c].Value;
    }
    int res;
    ... // Computation of the result goes here
    known[r,c] = res;
    return res;
}
This particular technique of implementing dynamic programming solutions is called memoization.
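To make that template concrete, here is one way the elided computation could be filled in; it is just the case analysis from the first function, passing known along on the recursive calls:
int minSum(int[,] m, int R, int C, int?[,] known, int r, int c) {
    if (known[r,c].HasValue) {
        return known[r,c].Value;
    }
    int res;
    if (r == R-1 && c == C-1) {
        res = m[r,c];
    } else if (r == R-1) {
        res = m[r,c] + minSum(m, R, C, known, r, c+1);
    } else if (c == C-1) {
        res = m[r,c] + minSum(m, R, C, known, r+1, c);
    } else {
        int goRight = m[r,c] + minSum(m, R, C, known, r, c+1);
        int goDown = m[r,c] + minSum(m, R, C, known, r+1, c);
        res = Math.Min(goRight, goDown);
    }
    known[r,c] = res;
    return res;
}
Call it with a fresh memo array, e.g. var known = new int?[R, C]; int best = minSum(m, R, C, known, 0, 0);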
First step is always analysis, in particular to try to figure out the scale of the problem.
OK, assuming you can only ever step down or to the right, you will have 79 steps down and 79 steps to the right: 158 steps total, of the form 011100101001... (1 = move right, 0 = move down). Note that the solution space is not as large as 2^158, since not all binary strings are possible: you must have exactly 79 downs and 79 rights. From combinatorics, this limits the number of possible paths to 158!/(79!·79!), which is still a very large number, something like 10^46.
You should realize that this is far too large to brute-force. Brute force would otherwise be worth considering if the project does not specifically rule it out, since it usually makes the algorithm simpler (e.g. by simply iterating over all possibilities), but I imagine the question has been designed this way precisely to require an algorithm that does not brute-force the answer.
The way to solve this problem without iterating the whole solution space is to realize that the best path to the lower-right corner is the better of the two best paths to the squares immediately to its left and immediately above it, and that the best paths to those squares are in turn built from the next diagonal (the numbers 524, 121, 111 in your diagram), and so on. A short sketch of that recurrence in code follows.
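A compact bottom-up version of that idea (a sketch; best[r, c] holds the cheapest sum from the top-left corner to cell (r, c)):
static int MinPathSum(int[,] m)
{
    int R = m.GetLength(0), C = m.GetLength(1);
    var best = new int[R, C];
    for (int r = 0; r < R; r++)
        for (int c = 0; c < C; c++)
            // take the cheaper of coming from above or from the left (edges have only one option)
            best[r, c] = m[r, c] + Math.Min(
                r > 0 ? best[r - 1, c] : (c > 0 ? int.MaxValue : 0),
                c > 0 ? best[r, c - 1] : (r > 0 ? int.MaxValue : 0));
    return best[R - 1, C - 1];
}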
What you need to do is treat each cell as a node in a graph and implement a shortest-path algorithm.
Dijkstra's algorithm is one of them. You can find more information here: https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
It is really simple, because you can divide the problem into a solved and an unsolved part and move items from unsolved to solved one by one. Start at the top left and move through all "/" diagonals towards the bottom right.
int size = 5;
int[,] matrix = new int[,] {
    {131,673,234,103, 18},
    {201, 96,342,965,150},
    {630,803,746,422,111},
    {537,699,497,121,956},
    {805,732,524, 37,331}
};
//Random rand = new Random();
//for (int y = 0; y < size; ++y)
//{
//    for (int x = 0; x < size; ++x)
//    {
//        matrix[y, x] = rand.Next(10);
//    }
//}
int[,] distance = new int[size, size];
distance[0, 0] = matrix[0, 0];
for (int i = 1; i < size * 2 - 1; ++i)
{
    int y = Math.Min(i, size - 1);
    int x = i - y;
    while (x < size && y >= 0)
    {
        distance[y, x] = Math.Min(
            x > 0 ? distance[y, x - 1] + matrix[y, x] : int.MaxValue,
            y > 0 ? distance[y - 1, x] + matrix[y, x] : int.MaxValue);
        x++;
        y--;
    }
}
for (int y = 0; y < size; ++y)
{
    for (int x = 0; x < size; ++x)
    {
        Console.Write(matrix[y, x].ToString().PadLeft(5, ' '));
    }
    Console.WriteLine();
}
Console.WriteLine();
for (int y = 0; y < size; ++y)
{
    for (int x = 0; x < size; ++x)
    {
        Console.Write(distance[y, x].ToString().PadLeft(5, ' '));
    }
    Console.WriteLine();
}
Console.WriteLine();

Optimizing an adjacency matrix creation from a 2D array of nodes

I am attempting to create an adjacency matrix from a 2D array of nodes. The adjacency matrix will be passed to a program that will cluster the nodes either through
Spectral clustering algorithm
Kmeans clustering algorithm
Node class
public class Node
{
    public int _id;
    public bool _isWalkable;
    public int _positionX;
    public int _positionY;
    public Vector3 _worldPosition;
}
Grid Class
public class Grid : MonoBehaviour
{
    void CreateGrid()
    {
        grid = new Node[_gridSizeX, _gridSizeY];
        Vector3 worldBottomLeft = transform.position -
            Vector3.right * worldSize.x / 2 - Vector3.forward * worldSize.y / 2;
        //set the grid
        int id = 0;
        for (int x = 0; x < _gridSizeX; x++)
        {
            for (int y = 0; y < _gridSizeY; y++)
            {
                Vector3 worldPosition = worldBottomLeft + Vector3.right *
                    (x * _nodeDiameter + _nodeRadius) +
                    Vector3.forward * (y * _nodeDiameter + _nodeRadius);
                //check to see if current position is walkable
                bool isWalkable =
                    !Physics.CheckSphere(worldPosition, _nodeRadius, UnwalkableMask);
                grid[x, y] = new Node(isWalkable, worldPosition, x, y);
                grid[x, y].Id = id++;
            }
        }
        totalNodes = id;
    }
}
Nodes are stored inside a 2D array called grid and represent a walkable path for a character to move on. I have successfully implemented an A* algorithm with a Euclidean distance heuristic. What I would like to do is cluster these nodes using the aforementioned clustering algorithms, but first I need to create an adjacency matrix for them. This is the best pseudocode I could come up with:
int[,] _adjacencyMatrix = new int[_gridSizeX * _gridSizeY, _gridSizeX * _gridSizeY];

for (int x = 0; x < _gridSizeX; x++)
{
    for (int y = 0; y < _gridSizeY; y++)
    {
        if (!grid[x, y]._isWalkable)
            continue;
        Node n = grid[x, y];
        List<Node> neighbours = GetNeighbours(n);
        for (int k = 0; k < neighbours.Count; k++)
        {
            _adjacencyMatrix[n._id, neighbours[k]._id] = 1;
        }
    }
}
public List<Node> GetNeighbours(Node n)
{
    //where is this node in the grid?
    List<Node> neighbours = new List<Node>();
    //this will search in a 3x3 block
    for (int x = -1; x <= 1; x++)
    {
        for (int y = -1; y <= 1; y++)
        {
            if (x == 0 && y == 0)
                continue; //we're at the current node
            int checkX = n._positionX + x;
            int checkY = n._positionY + y;
            if (checkX >= 0 && checkX < _gridSizeX && checkY >= 0
                && checkY < _gridSizeY)
            {
                if (grid[checkX, checkY]._isWalkable)
                    neighbours.Add(grid[checkX, checkY]);
                else
                    continue;
            }
        }
    }
    return neighbours;
}
My main concern
My main concern with this is the total complexity of the above algorithm. It feels like it's going to be heavy: I have a total of 75^2 = 5625 nodes, so the adjacency matrix will be 5625×5625 in size! There must be a better way to find the neighbours than this, is there?
The matrix is symmetric, so you only need to save half of it, see (How to store a symmetric matrix?) for an example. The matrix values are binary, so saving them as booleans or in a bit vector will cut down memory by a factor of 4 or 32, respectively.
Alternatively, since the check for two adjacent nodes takes constant time (Math.Abs(n1._positionX - n2._positionX) <= 1 && Math.Abs(n1._positionY - n2._positionY) <= 1 && n1._isWalkable && n2._isWalkable), you could just pass the clustering algorithm a function which checks for adjacency on the fly.
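A small sketch of that on-the-fly check (field names taken from the Node class above; the exact delegate shape depends on what your clustering code expects):
bool AreAdjacent(Node n1, Node n2)
{
    if (n1._positionX == n2._positionX && n1._positionY == n2._positionY)
        return false; // same cell, not a neighbour
    return Math.Abs(n1._positionX - n2._positionX) <= 1
        && Math.Abs(n1._positionY - n2._positionY) <= 1
        && n1._isWalkable
        && n2._isWalkable;
}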
5k by 5k is not very large. 100 MB is something you can keep in memory. If you want to avoid this cost, do not use algorithms based on distance matrices!
However, since your similarity appears to be
d(x,y) = 1 if adjacent and both nodes walkable else 0
your results will degenerate. If you are lucky, you get something like connected components (which you could have obtained much more easily).
Pairwise shortest paths would be more useful, but also more expensive to build. Maybe consider solving this first, though. Having a full adjacency matrix is a good starting point I guess.
k-means cannot work with pairwise distances at all. It needs distances point-to-mean only, for arbitrary means.
I suggest looking at graph algorithms and spending some more time understanding your objective before trying to squeeze the data into clustering algorithms that may be solving a different problem.

How can I improve this square root method?

I know this sounds like a homework assignment, but it isn't. Lately I've been interested in algorithms used to perform certain mathematical operations, such as sine, square root, etc. At the moment, I'm trying to write the Babylonian method of computing square roots in C#.
So far, I have this:
public static double SquareRoot(double x) {
    if (x == 0) return 0;
    double r = x / 2; // this is inefficient, but I can't find a better way
                      // to get a close estimate for the starting value of r
    double last = 0;
    int maxIters = 100;
    for (int i = 0; i < maxIters; i++) {
        r = (r + x / r) / 2;
        if (r == last)
            break;
        last = r;
    }
    return r;
}
It works just fine and produces the exact same answer as the .NET Framework's Math.Sqrt() method every time. As you can probably guess, though, it's slower than the native method (by around 800 ticks). I know this particular method will never be faster than the native method, but I'm just wondering if there are any optimizations I can make.
The only optimization I saw immediately was the fact that the calculation would run 100 times, even after the answer had already been determined (at which point, r would always be the same value). So, I added a quick check to see if the newly calculated value is the same as the previously calculated value and break out of the loop. Unfortunately, it didn't make much of a difference in speed, but just seemed like the right thing to do.
And before you say "Why not just use Math.Sqrt() instead?"... I'm doing this as a learning exercise and do not intend to actually use this method in any production code.
First, instead of checking for equality (r == last), you should check for convergence, i.e. that r is close to last, where "close" is defined by an arbitrary epsilon:
double eps = 1e-10; // pick any small number
if (Math.Abs(r - last) < eps)
    break;
As the Wikipedia article you linked to mentions, you don't efficiently calculate square roots with Newton's method; instead, you use logarithms.
float InvSqrt(float x) {
    float xhalf = 0.5f * x;
    int i = *(int*)&x;
    i = 0x5f3759df - (i >> 1);
    x = *(float*)&i;
    x = x * (1.5f - xhalf * x * x);
    return x;
}
This is my favorite fast square root. Actually it's the inverse of the square root, but you can invert it afterwards if you want. I can't say whether it's faster if you want the square root and not the inverse square root, but it's freaking cool just the same.
http://www.beyond3d.com/content/articles/8/
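The same trick can be written in (safe) C# with BitConverter instead of pointer casts; this is only a sketch to mirror the C code above, and the per-call byte-array allocations make it slower than the unsafe version:
static float InvSqrt(float x)
{
    float xhalf = 0.5f * x;
    int i = BitConverter.ToInt32(BitConverter.GetBytes(x), 0); // reinterpret the float bits as an int
    i = 0x5f3759df - (i >> 1);                                 // magic-constant initial guess
    x = BitConverter.ToSingle(BitConverter.GetBytes(i), 0);    // back to float
    x = x * (1.5f - xhalf * x * x);                            // one Newton-Raphson refinement step
    return x;
}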
What you are doing here is executing Newton's method for finding a root. So you could just use a more efficient root-finding algorithm. You can start searching for one here.
Replacing the division by 2 with a bit shift is unlikely to make that big a difference; given that the division is by a constant I'd hope the compiler is smart enough to do that for you, but you may as well try it to see.
You're much more likely to get an improvement by exiting from the loop early, so either store new r in a variable and compare with old r, or store x/r in a variable and compare that against r before doing the addition and division.
Instead of breaking the loop and then returning r, you could just return r. It may not provide any noticeable increase in performance.
With your method, each iteration doubles the number of correct bits.
Using a table to obtain the initial 4 bits (for example), you will have 8 bits after the 1st iteration, then 16 bits after the second, and all the bits you need after the fourth iteration (since a double stores 52+1 bits of mantissa).
For a table lookup, you can extract the mantissa in [0.5,1[ and exponent from the input (using a function like frexp), then normalize the mantissa in [64,256[ using multiplication by a suitable power of 2.
mantissa *= 2^K
exponent -= K
After this, your input number is still mantissa*2^exponent. K must be 7 or 8, to obtain an even exponent. You can obtain the initial value for the iterations from a table containing all the square roots of the integral part of mantissa. Perform 4 iterations to get the square root r of mantissa. The result is r*2^(exponent/2), constructed using a function like ldexp.
EDIT. I put some C++ code below to illustrate this. The OP's function sr1 with improved test takes 2.78s to compute 2^24 square roots; my function sr2 takes 1.42s, and the hardware sqrt takes 0.12s.
#include <math.h>
#include <stdio.h>

double sr1(double x)
{
    double last = 0;
    double r = x * 0.5;
    int maxIters = 100;
    for (int i = 0; i < maxIters; i++) {
        r = (r + x / r) / 2;
        if (fabs(r - last) < 1.0e-10)
            break;
        last = r;
    }
    return r;
}

double sr2(double x)
{
    // Square roots of values in 0..256 (rounded to nearest integer)
    static const int ROOTS256[] = {
0,1,1,2,2,2,2,3,3,3,3,3,3,4,4,4,4,4,4,4,4,5,5,5,5,5,5,5,5,5,5,6,6,6,6,6,6,6,6,6,6,6,6,
7,7,7,7,7,7,7,7,7,7,7,7,7,7,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,9,9,9,9,9,9,9,9,9,9,9,9,9,
9,9,9,9,9,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,11,11,11,11,11,
11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,12,12,12,12,12,12,12,12,12,12,12,12,
12,12,12,12,12,12,12,12,12,12,12,12,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,
13,13,13,13,13,13,13,13,13,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,
14,14,14,14,14,14,14,14,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,
15,15,15,15,15,15,15,15,15,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16 };
    // Normalize input
    int exponent;
    double mantissa = frexp(x, &exponent); // MANTISSA in [0.5,1[ unless X is 0
    if (mantissa == 0) return 0;           // X is 0
    if (exponent & 1) { mantissa *= 128; exponent -= 7; } // odd exponent
    else { mantissa *= 256; exponent -= 8; }              // even exponent
    // Here MANTISSA is in [64,256[
    // Initial value on 4 bits
    double root = ROOTS256[(int)floor(mantissa)];
    // Iterate
    for (int it = 0; it < 4; it++)
    {
        root = 0.5 * (root + mantissa / root);
    }
    // Restore exponent in result
    return ldexp(root, exponent >> 1);
}

int main()
{
    // Used to generate the table
    // for (int i=0;i<=256;i++) printf(",%.0f",sqrt(i));
    double s = 0;
    int mx = 1 << 24;
    // for (int i=0;i<mx;i++) s += sqrt(i); // 0.120s
    // for (int i=0;i<mx;i++) s += sr1(i);  // 2.780s
    for (int i = 0; i < mx; i++) s += sr2(i); // 1.420s
}
Define a tolerance and return early when subsequent iterations fall within that tolerance.
Since you said the code below was not fast enough, try this:
static double guess(double n)
{
    return Math.Pow(10, Math.Log10(n) / 2);
}
It should be very accurate and hopefully fast.
Here is code for the initial estimate described here. It appears to be pretty good. Use this code, and then you should also iterate until the values converge within an epsilon of difference.
public static double digits(double x)
{
    // Count the decimal digits of x (roughly), used to seed the guess below
    double n = Math.Floor(x);
    double d;
    if (x >= 1.0)
    {
        for (d = 1; n >= 1.0; ++d)
        {
            n = n / 10;
        }
    }
    else
    {
        for (d = 1; x < 1.0; ++d)
        {
            x = x * 10;
        }
    }
    return d;
}
public static double guess(double x)
{
    double output;
    double d = Program.digits(x);
    if (d % 2 == 0)
    {
        output = 6 * Math.Pow(10, (d - 2) / 2);
    }
    else
    {
        output = 2 * Math.Pow(10, (d - 1) / 2);
    }
    return output;
}
I have been looking at this as well for learning purposes. You may be interested in two modifications I tried.
The first was to use a first-order Taylor series approximation for the initial guess x0:
Func<double, double> fNewton = (b) =>
{
    // Use first order taylor expansion for initial guess
    // http://www27.wolframalpha.com/input/?i=series+expansion+x^.5
    double x0 = 1 + (b - 1) / 2;
    double xn = x0;
    do
    {
        x0 = xn;
        xn = (x0 + b / x0) / 2;
    } while (Math.Abs(xn - x0) > Double.Epsilon);
    return xn;
};
The second was to try a third-order (more expensive) iteration:
Func<double, double> fNewtonThird = (b) =>
{
    double x0 = b / 2;
    double xn = x0;
    do
    {
        x0 = xn;
        xn = (x0 * (x0 * x0 + 3 * b)) / (3 * x0 * x0 + b);
    } while (Math.Abs(xn - x0) > Double.Epsilon);
    return xn;
};
I created a helper method to time the functions
public static class Helper
{
    public static long Time(
        this Func<double, double> f,
        double testValue)
    {
        int imax = 120000;
        double avg = 0.0;
        Stopwatch st = new Stopwatch();
        for (int i = 0; i < imax; i++)
        {
            // note the timing is strictly on the function
            st.Start();
            var t = f(testValue);
            st.Stop();
            avg = (avg * i + t) / (i + 1);
        }
        Console.WriteLine("Average Val: {0}", avg);
        return st.ElapsedTicks / imax;
    }
}
The original method was faster, but again, might be interesting :)
Replacing "/ 2" by "* 0.5" makes this ~1.5 times faster on my machine, but of course not nearly as fast as the native implementation.
Well, the native Sqrt() function probably isn't implemented in C#; it'll most likely be done in a low-level language, and it'll certainly be using a more efficient algorithm. So trying to match its speed is probably futile.
However, in regard to just trying to optimize your function for the heck of it, the Wikipedia page you linked recommends the "starting guess" to be 2^floor(D/2), where D represents the number of binary digits in the number. You could give that an attempt; I don't see much else that could be optimized significantly in your code.
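A sketch of that starting guess in C# (assumes a normalized x >= 1; it pulls the binary exponent straight out of the IEEE-754 bits, so the guess is roughly 2^floor(D/2)):
static double InitialGuess(double x)
{
    long bits = BitConverter.DoubleToInt64Bits(x);
    int exponent = (int)((bits >> 52) & 0x7FF) - 1023; // unbiased binary exponent of x
    return Math.Pow(2.0, exponent / 2);                // within a factor of 2 of sqrt(x)
}
Starting the loop from that value instead of x / 2 should cut the number of Newton iterations needed for very large or very small inputs.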
You can try
r = x >> 1;
instead of / 2 (also in the other place you divide by 2).
It might give you a slight edge.
I would also move the 100 into the loop. Probably nothing, but we are talking about ticks here.
Just checking it now.
EDIT:
Fixed the > into >>, but it doesn't work for doubles, so never mind.
Inlining the 100 gave me no speed increase.
