I'm working on a translation of some C# code to C++. Since this is the first time I've worked with C++, there are some things I don't understand.
The structure of the original code in C# is:
public static Func<int, int, double> MyFunc(double InVar1, double InVar2)
{
return (FunVar1, FunVar2) =>
{
double Sum = FunVar1 + FunVar2:
double Prod = Sum * InVar1 * InVar2;
return Prod;
};
}
The code I tried to replicate in C++ is:
std::function<double(int, int)> MyFunc(double InVar1, double InVar2)
{
return [InVar1,InVar1](int FunVar1, int FunVar2)
{
double Sum = FunVar1 + FunVar2:
double Prod = Sum * InVar1 * InVar2;
return Prod;
};
}
First of all I'm not sure if the C++ structure replicates the C# one.
After that I'm getting an error on the capture-list:
C++ no suitable user-defined conversion from "type" to "[...]" exists
I also tried to put in the capture-list:
[]
[&]
[=]
But none of them worked.
The code looks OK to me, except that:
you're capturing InVar1 twice - the capture list should be [InVar1, InVar2];
at the end of double Sum = FunVar1 + FunVar2: you have a colon instead of a semicolon.
If I fix these errors the code compiles for me:
#include <functional>
std::function<double(int, int)> MyFunc(double InVar1, double InVar2)
{
return [InVar1,InVar2](int FunVar1, int FunVar2)
{
double Sum = FunVar1 + FunVar2;
double Prod = Sum * InVar1 * InVar2;
return Prod;
};
}
int main()
{
auto f = MyFunc(1.0, 2.0);
}
EDIT: On Compiler Explorer: https://godbolt.org/z/h1YKsPev4
I am trying to use the L-BFGS solver in Accord.net maths package in C#.
However, I cannot find how to define the starting value of the optimization.
How can we define it?
According to official examples, the following syntax defines the initial value of x in the optimization process. However it does not work properly in the following example - as if another starting point was used by the algorithm.
//Target function to minimize;
public double f(double[] x) {
double z = Math.Cos(x[0])-0.2*x[0] + x[1] * x[1]; //Function with multiple local minima : x ~ { (2n+1)pi , 0 }
return z;
}
//Gradient
public double[] g(double[] x) {
double[] grad = {-Math.Sin(x[0])-0.2 , 2 * x[1]};
return grad;
}
double[] x = {3*3.141592,0}; // Starting value (local minimum, -2.88)
var lbfgs = new BroydenFletcherGoldfarbShanno(numberOfVariables: 2, function: f, gradient: g);
bool success = lbfgs.Minimize();
double minValue = lbfgs.Value;
double[] solution = lbfgs.Solution; // {3.34,0} This solution is a local min that has a higher value (-1.65) than the local min next to which we started !!
The syntax is simply:
lbfgs.Minimize(x);
Thank you "500 - Internal Server Error" !
I am trying to get the max value from a List<>, but it's returning a value rounded to an integer. Is there something special I need to do?
private List<double> dataX = new List<double>();
double maxVal = dataX.Max<double>();
Debug.WriteLine("max: " + maxVal);
Edit:
As requested here is feeding data:
for (int i = 0; i < 10; i++)
{
data.Add(new ChartData(i, rand.NextDouble() * 10));
Debug.WriteLine(data.Last<ChartData>().Y);
}
My debug window shows this:
5,9358753151893
7,87125875608588
3,77212246589927
9,36056426230844
2,27154730924943
9,80201833872218
5,7350595275569
3,04650606729393
5,81677517658881
0,0514464220271662
max: 8
So I don't think the feeding side is wrong. And for the whole picture, here you can see the ChartData type:
public class ChartData
{
public double X { get; set; }
public double Y { get; set; }
public ChartData(double X, double Y)
{
this.X = X;
this.Y = Y;
}
}
And here is how I'm getting simple lists from my ChartData class:
private List<ChartData> data = new List<ChartData>();
private List<double> dataX = new List<double>();
void updateMaxMin()
{
dataX.Clear();
dataY.Clear();
for (int i = 0; i < data.Count - 1; i++)
{
dataX.Add(data[i].X);
dataY.Add(data[i].Y);
}
}
There are two likely scenarios here.
1) You are rounding the values as you enter them into the list (as @sam mentioned in his comment).
2) You are expecting a double value ending in 0 to show those decimal places. A double always drops insignificant trailing digits, so for example 1.500 is shown as 1.5; this is how doubles are intended to work. Another article that briefly talks about this is Double Skips last decimal if zero. If you are looking for different visual output, I would recommend converting the result to a string and then using string formatting. An example (using 2 decimal places):
Console.WriteLine(string.Format("max: {0:0.00}", maxVal));
Most likely the problem is in the way you insert into the list, as others have suggested here (you mentioned the value is rounded to an integer, so I'm assuming it is probably not display-related).
Try debugging your data in the list:
private List<double> dataX = new List<double>();
...
foreach(var data in dataX)
{
Debug.WriteLine("data: " + data);
}
double maxVal = dataX.Max<double>();
Debug.WriteLine("max: " + maxVal);
A possible issue with the way you populate the list could be something like:
var myNum = new[] { 1, 2, 3, 4, 5, 6 };
foreach (var num in myNum)
{
dataX.Add(num / 2);
}
The data added to dataX are actually integers, because dividing an int by 2 performs integer division and truncates the result.
double doesn't keep insignificant digits - there's no difference between 9, 9.0 and 9.0000.
If it's just for display purposes, refer to this link: C# Double - ToString() formatting with two decimal places but no rounding, which shows how to convert the double to a string with a fixed number of decimal places. But if you are using the value in calculations, there is no need to keep trailing decimal places.
OK, I found the mistake. I had been calculating the max value from a different list, so the max value I got was actually correct - just for the wrong data. It should be:
double maxVal = dataY.Max<double>();
instead of
double maxVal = dataX.Max<double>();
So I guess this isn't much help to anyone else; I will delete this question now that I've realized I made a basic mistake.
Thank you all anyway.
I'm trying to implement logistic regression myself, writing the code in C#. I found a library (Accord.NET) that I use to minimize the cost function. However, I always get a different minimum each time. Therefore I think something may be wrong in the cost function I wrote.
static double costfunction(double[] thetas)
{
int i = 0;
double sum = 0;
double[][] theta_matrix_transposed = MatrixCreate(1, thetas.Length);
while(i!=thetas.Length) { theta_matrix_transposed[0][i] = thetas[i]; i++;}
i = 0;
while (i != m) // m is the number of examples
{
int z = 0;
double[][] x_matrix = MatrixCreate(thetas.Length, 1);
while (z != thetas.Length) { x_matrix[z][0] = x[z][i]; z++; } //Put values from the training set to the matrix
double p = MatrixProduct(theta_matrix_transposed, x_matrix)[0][0];
sum += y[i] * Math.Log(sigmoid(p)) + (1 - y[i]) * Math.Log(1 - sigmoid(p));
i++;
}
double value = (-1 / m) * sum;
return value;
}
static double sigmoid(double z)
{
return 1 / (1 + Math.Exp(-z));
}
x is a list of lists that represents the training set, one list for each feature. What's wrong with the code? Why am I getting different results every time I run the L-BFGS? Thank you for your patience - I'm just getting started with machine learning!
That is very common with these optimization algorithms - the minimum you arrive at depends on your weight initialization. The fact that you are getting different minimums doesn't necessarily mean something is wrong with your implementation. Instead, check your gradients with the finite-differences method to make sure they are correct, and also look at your train/validation/test accuracy to see whether it is acceptable.
I hope somebody can be of assistance, thank you in advance.
I am using C# to build some simulation models for evaluating the mean stationary time of a request in a system and the degree of usage of the serving station.
I am using a function to generate the required numbers:
public double genTP(double miu)
{
Random random = new Random();
double u, x;
u = (double)random.NextDouble();
x = (-1 / miu) * Math.Log(1 - u);
return x;
}
This is the main:
Program p1 = new Program();
double NS = 1000000;
double lambda = 4;
double miu = 10;
double STP = 0;
double STS = 0;
double STL = 0;
double i = 1;
double Ta = 0;
double Tp = 0;
double Dis = 0;
do
{
Tp = p1.genTP(miu);
STP += Tp;
STS += Ta + Tp;
Dis = p1.genDIS(lambda);
if (Dis < Ta + Tp)
{
Ta = Ta + Tp - Dis;
}
else
{
STL += Dis - (Ta + Tp);
Ta = 0;
}
i++;
} while (i <= NS);
Console.WriteLine(STS / NS);
Console.WriteLine((STP/(STP+STL))*100);
1) The mean stationary time (r) returned is wrong: I get values like 0.09, but I should get something like ~0.1665. The algorithm is OK, I am 100% sure of that - I tried the same thing in Matlab and it was fine. The degree of usage (the last line) is also OK (around ~39.89); only r is wrong. Could it be a problem with the function, especially the random number generation?
2) Regarding my function genTP: if I change the parameter from double to int, it returns 0. I used the debugger to check why, and I saw that when the method calculates x, the (-1 / miu) part evaluates to 0; I tried casting to double but with no result. I was thinking this could be a source of the problem.
You're creating a new instance of Random each time you call genTP. If this is called multiple times in quick succession (as it is) then it will use the same seed each time. See my article on random numbers for more information.
Create a single instance of Random and pass it into the method:
private static double GenerateTP(Random random, double miu)
{
double u = random.NextDouble();
return (-1 / miu) * Math.Log(1 - u);
}
And...
Random random = new Random();
do
{
double tp = GenerateTP(random, miu);
...
}
A few more suggestions:
Declare your variables at the point of first use, with minimal scope
Follow .NET naming conventions
Don't make methods instance methods if they don't use any state
I prefer to create a static random field in the calculation class:
static Random random = new Random();
Now I can use it without worrying about calls in quick succession, and I don't need to pass it in as a function parameter (I'm not saying it's faster - it just reads closer to mathematical notation).
Regarding your second question: the compiler performs integer division because both operands are integers.
int / int = int (and the result is truncated).
If any of the args are floating point types, the operation is promoted to a floating point division. If you change your arg to an int, you should either use -1d (a double with value -1), or cast 'miu' to a double before usage:
x = (-1d / miu) ...
or
x = (-1 / (double)miu) ...
Or, of course, use them both as doubles. I prefer the first way.
Suppose I have written a class like this (the number of functions doesn't really matter, but in reality there will be about 3 or 4):
private class ReallyWeird
{
int y;
Func<double, double> f1;
Func<double, double> f2;
Func<double, double> f3;
public ReallyWeird()
{
this.y = 10;
this.f1 = (x => 25 * x + y);
this.f2 = (x => f1(x) + y * f1(x));
this.f3 = (x => Math.Log(f2(x) + f1(x)));
}
public double CalculusMaster(double x)
{
return f3(x) + f2(x);
}
}
I wonder if the C# compiler can optimize such code so that it won't go through numerous stack calls.
Can it inline delegates at compile time at all? If yes, under which conditions and to what limits? If not, why not?
Another question, maybe even more important: will it be significantly slower than if I had declared f1, f2 and f3 as methods?
I ask this because I want to keep my code as DRY as possible, so I want to implement a static class which extends basic random number generator (RNG) functionality: its methods accept one delegate (e.g. from the NextInt() method of the RNG) and return another Func delegate (e.g. for generating ulongs) built on top of the former. Since there are many different RNGs that can generate ints, I'd prefer not to implement the same extended functionality ten times in different places.
So this operation may be performed several times (i.e. the initial method of the class may be 'wrapped' by a delegate two or even three times). I wonder what the performance overhead will be like.
Thank you!
If you use Expression Trees instead of complete Func<> delegates, the compiler will be able to optimize the expressions.
Edit: To clarify, note that I'm not saying the runtime would optimize the expression tree itself (it shouldn't), but rather that since the resulting Expression<> tree is .Compile()d in one step, the JIT engine will simply see the repeated subexpressions and be able to optimize, consolidate, substitute and shortcut as it normally does.
(I'm not absolutely sure it does this on all platforms, but at least it should be able to fully leverage the JIT engine.)
Comment response
First, expression trees can potentially reach the same execution speed as Func<> (however, a Func<> will not have the same runtime cost: its JITting probably takes place while JITting the enclosing scope; in the case of ngen, it will even be AOT-compiled, as opposed to an expression tree).
Second: I agree that Expression Trees can be hard to use. See here for a famous simple example of how to compose expressions. However, more complicated examples are pretty hard to come by. If I've got the time I'll see whether I can come up with a proof of concept and see what MS.NET and Mono actually generate in MSIL for these cases.
Third: don't forget Henk Holterman is probably right in saying this is premature optimization (although composing Expression<> instead of Func<> ahead of time does add flexibility).
Lastly, if you really want to drive this very far, you might consider using Compiler as a Service (which Mono already has; I believe it is still upcoming for Microsoft?).
I would not expect a compiler to optimize this. The complications (because of the delegates) would be huge.
And I would not worry about a few stack frames here either. With 25 * x + y alone the stack/call overhead could be significant, but add a few other method calls (e.g. the PRNG) and the part you are focusing on here becomes very marginal.
I compiled a quick test application where I compared the delegate approach to an approach where I defined each calculation as a function.
When doing 10,000,000 calculations for each version I got the following results:
Running using delegates: 920 ms average
Running using regular method calls: 730 ms average
So while there is a difference, it is not very large and is probably negligible.
Now, there may be an error in my calculations so I am adding the entire code below. I compiled it in release mode in Visual Studio 2010:
using System;
using System.Diagnostics;

class Program
{
const int num = 10000000;
static void Main(string[] args)
{
for (int run = 1; run <= 5; run++)
{
Console.WriteLine("Run " + run);
RunTest1();
RunTest2();
}
Console.ReadLine();
}
static void RunTest1()
{
Console.WriteLine("Test1");
var t = new Test1();
var sw = Stopwatch.StartNew();
double x = 0;
for (var i = 0; i < num; i++)
{
t.CalculusMaster(x);
x += 1.0;
}
sw.Stop();
Console.WriteLine("Total time for " + num + " iterations: " + sw.ElapsedMilliseconds + " ms");
}
static void RunTest2()
{
Console.WriteLine("Test2");
var t = new Test2();
var sw = Stopwatch.StartNew();
double x = 0;
for (var i = 0; i < num; i++)
{
t.CalculusMaster(x);
x += 1.0;
}
sw.Stop();
Console.WriteLine("Total time for " + num + " iterations: " + sw.ElapsedMilliseconds + " ms");
}
}
class Test1
{
int y;
Func<double, double> f1;
Func<double, double> f2;
Func<double, double> f3;
public Test1()
{
this.y = 10;
this.f1 = (x => 25 * x + y);
this.f2 = (x => f1(x) + y * f1(x));
this.f3 = (x => Math.Log(f2(x) + f1(x)));
}
public double CalculusMaster(double x)
{
return f3(x) + f2(x);
}
}
class Test2
{
int y;
public Test2()
{
this.y = 10;
}
private double f1(double x)
{
return 25 * x + y;
}
private double f2(double x)
{
return f1(x) + y * f1(x);
}
private double f3(double x)
{
return Math.Log(f2(x) + f1(x));
}
public double CalculusMaster(double x)
{
return f3(x) + f2(x);
}
}