Starting value in optimization algorithms - C#

I am trying to use the L-BFGS solver from the Accord.NET math package in C#.
However, I cannot find how to define the starting value of the optimization.
How can I define it?
According to the official examples, the following syntax defines the initial value of x for the optimization. However, it does not work as expected in the example below, as if the algorithm used a different starting point.
// Target function to minimize
public double f(double[] x)
{
    // Function with multiple local minima: x ~ { (2n+1)*pi, 0 }
    double z = Math.Cos(x[0]) - 0.2 * x[0] + x[1] * x[1];
    return z;
}

// Gradient
public double[] g(double[] x)
{
    double[] grad = { -Math.Sin(x[0]) - 0.2, 2 * x[1] };
    return grad;
}

double[] x = { 3 * 3.141592, 0 }; // Starting value (local minimum, f ≈ -2.88)
var lbfgs = new BroydenFletcherGoldfarbShanno(numberOfVariables: 2, function: f, gradient: g);
bool success = lbfgs.Minimize();
double minValue = lbfgs.Value;
double[] solution = lbfgs.Solution; // {3.34, 0}: a local minimum with a higher value (-1.65) than the local minimum next to which we started!

The syntax is simply:
lbfgs.Minimize(x);
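For completeness, a minimal sketch of the corrected snippet, reusing the f, g, and x from the question (this assumes the Minimize(double[]) overload named in the answer; check your Accord.NET version for the exact API):

double[] x = { 3 * 3.141592, 0 }; // intended starting point, near the local minimum at (3*pi, 0)
var lbfgs = new BroydenFletcherGoldfarbShanno(numberOfVariables: 2, function: f, gradient: g);

// Pass the starting point to the solver instead of calling the parameterless overload.
bool success = lbfgs.Minimize(x);

double minValue = lbfgs.Value;      // expected: around -2.9 rather than -1.65
double[] solution = lbfgs.Solution; // expected: close to { 3*pi, 0 }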
Thank you "500 - Internal Server Error" !

Related

How to get the max value of a List<> of doubles

I am trying to get the max value from a List<>, but it returns a value rounded to an integer. Is there some special way to do this?
private List<double> dataX = new List<double>();
double maxVal = dataX.Max<double>();
Debug.WriteLine("max: " + maxVal);
Edit:
As requested, here is the code that feeds the data:
for (int i = 0; i < 10; i++)
{
    data.Add(new ChartData(i, rand.NextDouble() * 10));
    Debug.WriteLine(data.Last<ChartData>().Y);
}
My debug window shows this:
5,9358753151893
7,87125875608588
3,77212246589927
9,36056426230844
2,27154730924943
9,80201833872218
5,7350595275569
3,04650606729393
5,81677517658881
0,0514464220271662
max: 8
So I don't think the feeding side is wrong. And for the whole picture, here is the ChartData type:
public class ChartData
{
    public double X { get; set; }
    public double Y { get; set; }

    public ChartData(double X, double Y)
    {
        this.X = X;
        this.Y = Y;
    }
}
And here is how I build the simple lists from my ChartData list:
private List<ChartData> data = new List<ChartData>();
private List<double> dataX = new List<double>();
private List<double> dataY = new List<double>();

void updateMaxMin()
{
    dataX.Clear();
    dataY.Clear();
    for (int i = 0; i < data.Count - 1; i++)
    {
        dataX.Add(data[i].X);
        dataY.Add(data[i].Y);
    }
}
There are two likely scenarios here.
1. You are rounding the values as you enter them into the list (as #sam mentioned in his comment).
2. You are expecting a double value ending in 0 to show those decimal places. A double does not keep insignificant trailing digits, so 1.500 is displayed as 1.5; this is how doubles are intended to work. Another article that briefly talks about this is Double Skips last decimal if zero. If you are looking for different visual output, I would recommend converting the result to a string and using string formatting. An example (using 2 decimal places):
Console.WriteLine(string.Format("max: {0:0.00}", maxVal));
Most likely the problem is in the way you insert into the list, as some have suggested here (you mentioned rounding to an integer, so I'm assuming it is probably not a display issue).
Try debugging the data in your list:
private List<double> dataX = new List<double>();
...
foreach (var data in dataX)
{
    Debug.WriteLine("data: " + data);
}
double maxVal = dataX.Max<double>();
Debug.WriteLine("max: " + maxVal);
A possible issue with the way you populate the list could be something like:
var myNum = new[] { 1, 2, 3, 4, 5, 6 };
foreach (var num in myNum)
{
    dataX.Add(num / 2);
}
The values added to dataX are actually integers, because the division by 2 is an integer division that truncates before the implicit conversion to double.
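If that is the cause, a minimal sketch of a fix for the hypothetical loop above is to force a floating-point division before adding:

var myNum = new[] { 1, 2, 3, 4, 5, 6 };
foreach (var num in myNum)
{
    // Dividing by 2.0 (a double) makes this a floating-point division,
    // so 3 / 2.0 yields 1.5 instead of 1 before it is stored in the list.
    dataX.Add(num / 2.0);
}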
double doesn't keep insignificant digits - there is no difference between 9, 9.0 and 9.0000.
If you only need this for display purposes, see C# Double - ToString() formatting with two decimal places but no rounding for converting the double to a string with a fixed number of decimal places. If you are using the value in calculations, there is no need to keep trailing decimal places.
OK, I found the mistake. I was calculating the max value from a different list, so the value I got was correct, just for the wrong data. It should be:
double maxVal = dataY.Max<double>();
instead of
double maxVal = dataX.Max<double>();
So I guess this isn't much help, and I will delete this question now that you have seen I made a basic mistake.
Thank you all anyway.
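As a side note, a small sketch of an alternative that avoids keeping the parallel dataX/dataY lists in sync at all, using LINQ's Max with a selector directly on the data list from the question:

using System.Linq;

// Max with a selector reads the values straight from the ChartData objects,
// so there is no second list to update (or to mix up).
double maxY = data.Max(d => d.Y);
double maxX = data.Max(d => d.X);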

What is wrong with this implementation of an arcsine approximation in C#?

This is a formula to approximate arcsin(x) using a Taylor series, from this blog.
This is my implementation in C#. I don't know where the mistake is; the code gives a wrong result when run:
When i = 0, the division is 1/x, so I assign temp = 1/x at startup. On each iteration, I update temp after i.
I use a loop that continues until two consecutive values are very close together; when the difference between them is very small, I return the value.
My test case:
The input is x = 1, so the expected arcsin(x) is arcsin(1) = PI/2 = 1.57079633 rad.
using System;

class Arc
{
    static double abs(double x)
    {
        return x >= 0 ? x : -x;
    }

    static double pow(double mu, long n)
    {
        double kq = mu;
        for (long i = 2; i <= n; i++)
        {
            kq *= mu;
        }
        return kq;
    }

    static long fact(long n)
    {
        long gt = 1;
        for (long i = 2; i <= n; i++)
        {
            gt *= i;
        }
        return gt;
    }

    #region arcsin
    static double arcsinX(double x)
    {
        int i = 0;
        double temp = 0;
        while (true)
        {
            //i++;
            var iFactSquare = fact(i) * fact(i);
            var tempNew = (double)fact(2 * i) / (pow(4, i) * iFactSquare * (2 * i + 1)) * pow(x, 2 * i + 1);
            if (abs(tempNew - temp) < 0.00000001)
            {
                return tempNew;
            }
            temp = tempNew;
            i++;
        }
    }
    #endregion

    public static void Main()
    {
        Console.WriteLine(arcsinX(1));
        Console.ReadLine();
    }
}
In many series evaluations, it is often convenient to use the quotient between terms to update the term. The quotient here is
a[n]/a[n-1] = [ (2n)! * x^(2n+1) / (4^n * (n!)^2 * (2n+1)) ]
            * [ 4^(n-1) * ((n-1)!)^2 * (2n-1) / ((2n-2)! * x^(2n-1)) ]
            = (2n * (2n-1)^2 * x^2) / (4n^2 * (2n+1))
            = ((2n-1)^2 * x^2) / (2n * (2n+1))
Thus a loop to compute the series value is
sum = 1;
term = 1;
n = 1;
while (1 != 1 + term)   // loop until the term no longer changes the sum
{
    term *= (n - 0.5) * (n - 0.5) * x * x / (n * (n + 0.5));
    sum += term;
    n += 1;
}
return x * sum;
Convergence is only guaranteed for abs(x) < 1; for the evaluation at x = 1 you have to employ angle halving, which in general is a good idea to speed up convergence anyway.
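For reference, a self-contained C# sketch of this term-ratio approach (the class and method names are mine, not from the question; the halving step uses the identity arcsin(x) = pi/2 - 2*arcsin(sqrt((1 - x) / 2)) for x close to 1):

using System;

static class ArcsinSeries
{
    // Evaluates the Taylor series, updating each term from the previous one
    // with the ratio ((2n-1)^2 * x^2) / (2n * (2n+1)) derived above.
    static double SeriesCore(double x)
    {
        double sum = 1.0, term = 1.0, n = 1.0;
        while (1.0 != 1.0 + term) // stop once the term no longer changes the sum
        {
            term *= (n - 0.5) * (n - 0.5) * x * x / (n * (n + 0.5));
            sum += term;
            n += 1.0;
        }
        return x * sum;
    }

    // The series converges very slowly near |x| = 1, so apply angle halving there.
    public static double Arcsin(double x)
    {
        if (x < 0) return -Arcsin(-x);
        if (x > 0.7) return Math.PI / 2 - 2 * SeriesCore(Math.Sqrt((1 - x) / 2));
        return SeriesCore(x);
    }
}

For the question's test case, ArcsinSeries.Arcsin(1.0) evaluates to exactly Math.PI / 2, since the halving step maps x = 1 to 0.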
You are saving two different values (temp and tempNew) to check whether continuing the computation is still worthwhile. That is fine, except that you never accumulate a running sum of the terms.
This is a summation: you need to add every newly calculated term to a running total. You are only keeping track of the most recently calculated term, so you can only ever return the last term of the series, which is why you always get an extremely small number as your result. Turn this into a summation and the problem should go away.
NOTE: I've made this a community wiki answer because I was hardly the first person to think of this (just the first to put it down in a comment). If you feel that more needs to be added to make the answer complete, just edit it in!
The general suspicion is that this is down to integer overflow, namely that one of your values (probably the return value of fact(), or the fact(i) * fact(i) product) is getting too big for the type you have chosen. It goes negative because you are using signed types: when the value grows past the largest positive number, it wraps around into the negative range.
Try tracking how large i gets during your calculation, and figure out how big a number that produces when run through your fact and pow functions. If it is bigger than long.MaxValue (a long is a 64-bit signed integer in C#), then try using a double instead.
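A small sketch illustrating the overflow with the question's fact function (long.MaxValue is about 9.2e18, and 21! is already larger than that):

using System;

class OverflowDemo
{
    static long fact(long n)
    {
        long gt = 1;
        for (long i = 2; i <= n; i++) gt *= i;
        return gt;
    }

    static void Main()
    {
        Console.WriteLine(long.MaxValue);   // 9223372036854775807
        Console.WriteLine(fact(20));        // 2432902008176640000 - still fits
        Console.WriteLine(fact(21));        // silently wraps around to a negative value

        // The same computation in a checked context throws instead of wrapping:
        try
        {
            checked
            {
                long f = fact(20);
                f *= 21;                    // OverflowException here
            }
        }
        catch (OverflowException)
        {
            Console.WriteLine("21! does not fit in a long.");
        }

        // In the question's series, fact(2 * i) already overflows at i = 11 (22! >> long.MaxValue),
        // long before the terms become small enough to stop the loop.
    }
}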

Can't get cost function for logistic regression to work

I'm trying to implement logistic regression by myself in C#. I use a library (Accord.NET) to minimize the cost function. However, I always get different minima, so I suspect something may be wrong with the cost function I wrote.
static double costfunction(double[] thetas)
{
    int i = 0;
    double sum = 0;
    double[][] theta_matrix_transposed = MatrixCreate(1, thetas.Length);
    while (i != thetas.Length) { theta_matrix_transposed[0][i] = thetas[i]; i++; }
    i = 0;
    while (i != m) // m is the number of examples
    {
        int z = 0;
        double[][] x_matrix = MatrixCreate(thetas.Length, 1);
        while (z != thetas.Length) { x_matrix[z][0] = x[z][i]; z++; } // put values from the training set into the matrix
        double p = MatrixProduct(theta_matrix_transposed, x_matrix)[0][0];
        sum += y[i] * Math.Log(sigmoid(p)) + (1 - y[i]) * Math.Log(1 - sigmoid(p));
        i++;
    }
    double value = (-1 / m) * sum;
    return value;
}

static double sigmoid(double z)
{
    return 1 / (1 + Math.Exp(-z));
}
x is a list of lists that represent the training set, one list for each feature. What's wrong with the code? Why am I getting different results every time I run the L-BFGS? Thank you for your patience, I'm just getting started with machine learning!
That is very common with these optimization algorithms - the minimum you arrive at depends on your weight initialization. The fact that you are getting different minima doesn't necessarily mean something is wrong with your implementation. Instead, check your gradients with the finite-difference method to make sure they are correct, and also look at your train/validation/test accuracy to see whether it is acceptable.
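For reference, a minimal sketch of such a finite-difference check. Here cost would be the costfunction from the question and gradient whatever analytic gradient you hand to the optimizer (both passed in as delegates; the step size 1e-6 is just a common choice):

using System;

static class GradientCheck
{
    // Compares an analytic gradient against central finite differences:
    // dJ/dtheta_j ≈ (J(theta + eps*e_j) - J(theta - eps*e_j)) / (2 * eps).
    public static void Check(Func<double[], double> cost,
                             Func<double[], double[]> gradient,
                             double[] theta, double eps = 1e-6)
    {
        double[] analytic = gradient(theta);
        for (int j = 0; j < theta.Length; j++)
        {
            double[] plus = (double[])theta.Clone();
            double[] minus = (double[])theta.Clone();
            plus[j] += eps;
            minus[j] -= eps;
            double numeric = (cost(plus) - cost(minus)) / (2 * eps);
            Console.WriteLine($"theta[{j}]: analytic = {analytic[j]}, numeric = {numeric}");
        }
    }
}

If the two columns disagree by more than a tiny relative error, the bug is in the cost or gradient code rather than in the optimizer.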

Grid Computing API

I want to write a distributed software system (a system where you can execute programs faster than on a single PC) that can execute different kinds of programs. (As it is a school project, I'll probably run programs like a prime finder and a pi calculator on it.)
My preference is that it should be written in C# with .NET, have good documentation, be simple to use (I'm not new to C# and .NET, but I'm not a professional), and make it easy to write tasks for the grid and/or to load programs onto the network directly from an .exe.
I've looked a little at the:
MPAPI
Utilify(from the makers of Alchemy)
NGrid (Outdated?)
Which one is the best for my case? Do you have any experience with them?
PS: I'm aware of many similar questions here, but they were either outdated, lacked proper answers, or didn't answer my question, which is why I chose to ask again.
I just contacted the founder of Utilify (Krishna Nadiminti), and while active development has paused for now, he has kindly released all the source code here on Bitbucket.
I think it is worth continuing this project, as there are literally no comparable alternatives at the moment (even commercial ones). I may start working on it, but don't wait for me :).
I had the same problem. I tried NGrid, Alchemi and MS PI.net.
In the end I decided to start my own open source project to play around with; check here: http://lucygrid.codeplex.com/.
UPDATE:
Here is what the PI example looks like:
The function passed to AsParallelGrid will be executed by the grid nodes.
You can play with it by running the DEMO project.
/// <summary>
/// Distributes simple const processing
/// </summary>
class PICalculation : AbstractDemo
{
    public int Steps = 100000;
    public int ChunkSize = 50;

    public PICalculation()
    {
    }

    public override string Info()
    {
        return "Calculates PI over the grid.";
    }

    public override string Run(bool enableLocalProcessing)
    {
        double sum = 0.0;
        double step = 1.0 / (double)Steps;

        /* ORIGINAL VERSION
        object obj = new object();
        Parallel.ForEach(
            Partitioner.Create(0, Steps),
            () => 0.0,
            (range, state, partial) =>
            {
                for (long i = range.Item1; i < range.Item2; i++)
                {
                    double x = (i - 0.5) * step;
                    partial += 4.0 / (1.0 + x * x);
                }
                return partial;
            },
            partial => { lock (obj) sum += partial; });
        */

        sum = Enumerable
            .Range(0, Steps)
            // Create bucket
            .GroupBy(s => s / 50)
            // Local variable initialization is not distributed over the grid
            .Select(i => new
            {
                Item1 = i.First(),
                Item2 = i.Last() + 1, // Inclusive
                Step = step
            })
            .AsParallelGrid(data =>
            {
                double partial = 0;
                for (var i = data.Item1; i != data.Item2; ++i)
                {
                    double x = (i - 0.5) * data.Step;
                    partial += (double)(4.0 / (1.0 + x * x));
                }
                return partial;
            }, new GridSettings()
            {
                EnableLocalProcessing = enableLocalProcessing
            })
            .Sum() * step;

        return sum.ToString();
    }
}

C# regarding a parameter and running program

I hope somebody can be of assistance, thank you in advance.
I am using C# to build some simulation models for evaluating the mean stationary time of a request in a system and the degree of utilization of the serving station in that system.
I am using a function to generate the required numbers:
public double genTP(double miu)
{
    Random random = new Random();
    double u, x;
    u = (double)random.NextDouble();
    x = (-1 / miu) * Math.Log(1 - u);
    return x;
}
This is the main:
Program p1 = new Program();
double NS = 1000000;
double lambda = 4;
double miu = 10;
double STP = 0;
double STS = 0;
double STL = 0;
double i = 1;
double Ta = 0;
double Tp = 0;
double Dis = 0;
do
{
    Tp = p1.genTP(miu);
    STP += Tp;
    STS += Ta + Tp;
    Dis = p1.genDIS(lambda);
    if (Dis < Ta + Tp)
    {
        Ta = Ta + Tp - Dis;
    }
    else
    {
        STL += Dis - (Ta + Tp);
        Ta = 0;
    }
    i++;
} while (i <= NS);
Console.WriteLine(STS / NS);
Console.WriteLine((STP / (STP + STL)) * 100);
1) The mean stationary time (which is r) that is returned is wrong; I get values like 0.09, but I should get something around 0.1665. The algorithm is OK, I am 100% sure of that; I tried the same thing in Matlab and it was fine. The degree of utilization (the last line) is also OK (around 39.89); only r is wrong. Could it be a problem with the function, especially the random function that generates the numbers?
2) Regarding my function genTP: if I change the parameter from double to int, it returns 0 at the end. I used the debugger to check why, and I saw that when the method computes (-1 / miu), the result is 0; I tried to cast to double, but with no result. I was thinking this could be a source of the problem.
You're creating a new instance of Random each time you call genTP. If this is called multiple times in quick succession (as it is) then it will use the same seed each time. See my article on random numbers for more information.
Create a single instance of Random and pass it into the method:
private static double GenerateTP(Random random, double miu)
{
double u = random.NextDouble();
return (-1 / miu) * Math.Log(1 - u);
}
And...
Random random = new Random();
do
{
    double tp = GenerateTP(random, miu);
    ...
}
A few more suggestions:
Declare your variables at the point of first use, with minimal scope
Follow .NET naming conventions
Don't make methods instance methods if they don't use any state
I prefer to create a static Random field in the calculation class:
static Random random = new Random();
Now I can use it without worrying about calls in quick succession, and I don't need to pass it as a function parameter (I'm not claiming it is faster, it's just closer to mathematical notation).
Regarding your second question: it happens because the compiler performs integer division when both operands are integers.
int / int = int (and the result is truncated).
If either operand is a floating-point type, the operation is promoted to floating-point division. If you change your parameter to an int, you should either use -1d (a double literal with value -1) or cast miu to double before the division:
x = (-1d / miu) ...
or
x = (-1 / (double)miu) ...
Or, of course, keep them both as doubles. I prefer the first way.
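A tiny sketch showing the difference (the value of miu is arbitrary):

using System;

class DivisionDemo
{
    static void Main()
    {
        int miu = 10;

        Console.WriteLine(-1 / miu);          // 0    : int / int is integer division, truncated toward zero
        Console.WriteLine(-1d / miu);         // -0.1 : a double literal promotes the division
        Console.WriteLine(-1 / (double)miu);  // -0.1 : casting the divisor does the same
    }
}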
