I am trying to convert a recursive algorithm from CPU to GPU using the Alea library. I get the following error when I build the code:
"Fody/Alea.CUDA: AOTCompileServer exited unexpectly with exit code -1073741571"
public class GPUModule : ILGPUModule
{
public GPUModule (GPUModuleTarget target) : base(target)
{
}
[Kernel] //Same Error whether RecursionTest is another Kernel or not.
public void RecursionTest(deviceptr<int> a)
{
...
RecursionTest(a);
}
[Kernel]
public void MyKernel(deviceptr<int> a, ...)
{
...
var shared = __shared__.Array<int>(10);
RecursionTest(Intrinsic.__array_to_ptr<int>(shared)); //Error here
}
...
}
I would appreciate it if you could provide any documentation or links to recursion examples in C# using the Alea library.
Thanks in advance
You are using Alea GPU 2.x; the newest version is Alea GPU 3.x (see www.aleagpu.com). I made a test with 3.0 and it works:
using Alea;
using Alea.CSharp;
using NUnit.Framework;
public static void RecursionTestFunc(deviceptr<int> a)
{
if (a[0] == 0)
{
a[0] = -1;
}
else
{
a[0] -= 1;
RecursionTestFunc(a);
}
}
public static void RecursionTestKernel(int[] a)
{
var tid = threadIdx.x;
var ptr = DeviceFunction.AddressOfArray(a);
ptr += tid;
RecursionTestFunc(ptr);
}
[Test]
public static void RecursionTest()
{
var gpu = Gpu.Default;
var host = new[] {1, 2, 3, 4, 5};
var length = host.Length;
var dev = gpu.Allocate(host);
gpu.Launch(RecursionTestKernel, new LaunchParam(1, length), dev);
var actual = Gpu.CopyToHost(dev);
var expected = new[] {-1, -1, -1, -1, -1};
Assert.AreEqual(expected, actual);
Gpu.Free(dev);
}
Edit: I changed System.Array.Clear(new[] {1,2,3}, 0, 2); to System.Array.Clear(numbers, 0, 2); but I now get the output [0,z0,z3] and was expecting [0,0,3].
I'm learning C# and am currently learning about arrays and Clear(). When trying to see what happens when I use Clear(), I get this output:
I don't understand why this happens. Wasn't it supposed to be [0,0,3]?
My code looks like this:
Program.cs
namespace Introduction
{
internal class Program
{
/* MAIN FUNCTION */
public static void Main()
{
// RunControlFlow();
RunArrays();
}
/* ARRAYS */
public static void RunArrays()
{
// Var declaration
var array = new Arrays.Array();
array.Manipulation();
}
}
}
Arrays.cs
using System;
namespace Introduction.Arrays
{
public class Array
{
public void Manipulation()
{
var numbers = new[] {1, 2, 3};
System.Array.Clear(numbers, 0, 2);
Console.WriteLine("Manipulation | Clearing from index 0 to 2: [{0}]", string.Join(",z", numbers));
}
}
}
You are not passing the numbers array to the Clear method; you are creating a new array that has the same elements, but it is a completely different reference and has nothing to do with numbers. That is why the values in numbers stay unchanged:
Array.Clear(numbers, 0, 2);
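If it's useful, here is a minimal sketch of the corrected method (assuming the rest of the class stays as posted). It also replaces the ",z" separator passed to string.Join, which is what produced the stray "z" characters in the [0,z0,z3] output mentioned in the edit:
public void Manipulation()
{
    var numbers = new[] { 1, 2, 3 };

    // Pass the existing array so Clear zeroes its first two elements in place.
    System.Array.Clear(numbers, 0, 2);

    // Join with "," instead of ",z" so no extra characters appear in the output.
    Console.WriteLine("Manipulation | Clearing from index 0 to 2: [{0}]",
        string.Join(",", numbers)); // prints [0,0,3]
}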
I've recently been working on finding more than just the optimal route using Google's OR-Tools. I found an example in the repo, but it only solves for the optimal route. Any idea how to generate more than one solution for a set of points? I'm currently working with the .NET version of the tool, but a solution in any other language would also be helpful!
public class tspParams : NodeEvaluator2
{
public static int[,] distanceMatrix =
{
{ 0, 20, 40, 10 },
{ 20, 0, 4, 55 },
{ 40, 4, 0, 2 },
{ 10, 55, 2, 0 }
};
public static int tsp_size
{
get { return distanceMatrix.GetUpperBound(0) + 1; }
}
public static int num_routes
{
get { return 1; }
}
public static int depot
{
get { return 0; }
}
public override long Run(int FromNode, int ToNode)
{
return distanceMatrix[FromNode, ToNode];
}
}
public class TSP
{
public static void PrintSolution(RoutingModel routing, Assignment solution)
{
Console.WriteLine("Distance of the route: {0}", solution.ObjectiveValue());
var index = routing.Start(0);
Console.WriteLine("Route for Vehicle 0:");
while (!routing.IsEnd(index))
{
Console.Write("{0} -> ", routing.IndexToNode(index));
var previousIndex = index;
index = solution.Value(routing.NextVar(index));
}
Console.WriteLine("{0}", routing.IndexToNode(index));
//Console.WriteLine("Calculated optimal route!");
}
public static void Solve()
{
// Create Routing Model
RoutingModel routing = new RoutingModel(
tspParams.tsp_size,
tspParams.num_routes,
tspParams.depot);
// Define weight of each edge
NodeEvaluator2 distanceEvaluator = new tspParams();
//protect callbacks from the GC
GC.KeepAlive(distanceEvaluator);
routing.SetArcCostEvaluatorOfAllVehicles(distanceEvaluator);
// Setting first solution heuristic (cheapest addition).
RoutingSearchParameters searchParameters = RoutingModel.DefaultSearchParameters();
searchParameters.FirstSolutionStrategy = FirstSolutionStrategy.Types.Value.PathCheapestArc;
Assignment solution = routing.SolveWithParameters(searchParameters);
PrintSolution(routing, solution);
}
}
Use an AllSolutionCollector from the underlying CP solver. Python code:
solver = routing.solver()
collector = solver.AllSolutionCollector()
for location_idx in range(len(data['time_windows'])):
    index = manager.NodeToIndex(location_idx)
    time_var = time_dimension.CumulVar(index)
    next_var = routing.NextVar(index)
    collector.Add(time_var)
    collector.Add(next_var)
for v in range(data['num_vehicles']):
    index = routing.Start(v)
    time_var = time_dimension.CumulVar(index)
    next_var = routing.NextVar(index)
    collector.Add(time_var)
    collector.Add(next_var)
    index = routing.End(v)
    time_var = time_dimension.CumulVar(index)
    collector.Add(time_var)
routing.AddSearchMonitor(collector)
assignment = routing.SolveFromAssignmentWithParameters(initial_solution, search_parameters)
if assignment:
    logger.info("solution count: %d", collector.SolutionCount())
    for index in range(collector.SolutionCount()):
        logger.info("solution index: %d", index)
        self.print_solution(data, manager, routing, collector.Solution(index))
    logger.info('final solution:')
    self.print_solution(data, manager, routing, assignment)
else:
    raise OptimizationInternalException("no solution found")
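For the .NET version, the same idea should translate roughly as sketched below, keeping the old NodeEvaluator2-based API from your question. This is an unverified sketch: the member names routing.solver(), MakeAllSolutionCollector, AddSearchMonitor, SolutionCount and Solution are taken from the SWIG-generated .NET wrapper and should be checked against the OR-Tools version you have installed.
public static void SolveAndCollectAll()
{
    RoutingModel routing = new RoutingModel(
        tspParams.tsp_size,
        tspParams.num_routes,
        tspParams.depot);
    NodeEvaluator2 distanceEvaluator = new tspParams();
    GC.KeepAlive(distanceEvaluator);
    routing.SetArcCostEvaluatorOfAllVehicles(distanceEvaluator);

    // Ask the underlying CP solver to keep every assignment visited during the search.
    Solver cpSolver = routing.solver();
    SolutionCollector collector = cpSolver.MakeAllSolutionCollector();
    for (int node = 0; node < tspParams.tsp_size; node++)
    {
        collector.Add(routing.NextVar(node));
    }
    routing.AddSearchMonitor(collector);

    RoutingSearchParameters searchParameters = RoutingModel.DefaultSearchParameters();
    searchParameters.FirstSolutionStrategy = FirstSolutionStrategy.Types.Value.PathCheapestArc;
    Assignment best = routing.SolveWithParameters(searchParameters);

    Console.WriteLine("Solutions collected: {0}", collector.SolutionCount());
    for (int i = 0; i < collector.SolutionCount(); i++)
    {
        PrintSolution(routing, collector.Solution(i));
    }
    PrintSolution(routing, best);
}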
I was trying to translate this Python code for a neural network
https://gist.github.com/miloharper/c5db6590f26d99ab2670#file-main-py
into C#. I'm using Math.NET Numerics for the matrices, and here is the code I've written so far in C#:
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics;
using MathNet.Numerics.LinearAlgebra.Double;
namespace NeuralNetwork
{
class Program
{
static void Main(string[] args)
{
NeuralNetwork NN = new NeuralNetwork();
Console.WriteLine("Random starting synaptic weights: ");
Console.WriteLine(NN.SynapticWeights);
Matrix<double> TrainingSetInput = DenseMatrix.OfArray(new double[,] { { 0, 0, 1 }, { 1, 1, 1 }, { 1, 0, 1 }, { 0, 1, 1 } });
Matrix<double> TrainingSetOutput = DenseMatrix.OfArray(new double[,] { { 0, 1, 1, 0 } }).Transpose();
NN.Train(TrainingSetInput, TrainingSetOutput, 10000);
Console.WriteLine("New synaptic weights after training: ");
Console.WriteLine(NN.SynapticWeights);
Console.WriteLine("Considering new situation {1, 0, 0} -> ?: ");
Console.WriteLine(NN.Think(DenseMatrix.OfArray(new double[,] { { 1, 0, 0 } })));
Console.ReadLine();
}
}
class NeuralNetwork
{
private Matrix<double> _SynapticWeights;
public NeuralNetwork()
{
_SynapticWeights = 2 * Matrix<double>.Build.Random(3, 1) - 1;
}
private Matrix<double> Sigmoid(Matrix<double> Input)
{
return 1 / (1 + Matrix<double>.Exp(-Input));
}
private Matrix<double> SigmoidDerivative(Matrix<double> Input)
{
return Input * (1 - Input); //NEW Exception here
}
public Matrix<double> Think(Matrix<double> Input)
{
return Sigmoid((Input * _SynapticWeights)); //Exception here (Solved)
}
public void Train(Matrix<double> TrainingInput, Matrix<double> TrainingOutput, int TrainingIterations)
{
for (int Index = 0; Index < TrainingIterations; Index++)
{
Matrix<double> Output = Think(TrainingInput);
Matrix<double> Error = TrainingOutput - Output;
Matrix<double> Adjustment = Matrix<double>.op_DotMultiply(TrainingInput.Transpose(), Error * SigmoidDerivative(Output));
_SynapticWeights += Adjustment;
}
}
public Matrix<double> SynapticWeights { get { return _SynapticWeights; } set { _SynapticWeights = value; } }
}
}
When I execute it, it shows an exception on line 53 (there is a comment at that line in the code). It says:
Matrix dimensions must agree: op1 is 4x3, op2 is 3x1
Did I copy it wrong, or is it a problem with the Math.NET library?
Thanks in advance :D
As far as I can see, the problem is not in the library code. You are trying to perform an element-wise matrix multiplication using the op_DotMultiply function (line 53). In this case it is obvious from the error message that the matrices cannot be multiplied because of the difference in their sizes: the first one is 4x3, the second one is 3x1.
I suggest looking at the sizes of these matrices: TrainingSetInput and _SynapticWeights (look at the Train function; you are calling Think inside it with wrongly sized matrices). Check that they are generated correctly, and also think about whether you really need an element-wise matrix multiplication or ordinary matrix multiplication.
If you need more info about matrix multiplications, you can use these links:
Element-wise:
https://en.wikipedia.org/wiki/Hadamard_product_(matrices)
Usual:
https://en.wikipedia.org/wiki/Matrix_multiplication
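If it helps, here is a rough, unverified sketch of how the element-wise steps of the Python gist can be written with Math.NET Numerics, assuming a 3.x version where Matrix<T>.Map and PointwiseMultiply are available. It is illustrative rather than a tested drop-in fix; the gist uses ordinary matrix multiplication for the forward pass and the weight update, and element-wise (Hadamard) operations everywhere else:
// Sigmoid: 1 / (1 + exp(-x)) applied to each element.
private Matrix<double> Sigmoid(Matrix<double> input)
{
    return input.Map(x => 1.0 / (1.0 + Math.Exp(-x)));
}

// Derivative expressed in terms of the sigmoid output: x * (1 - x), element-wise.
private Matrix<double> SigmoidDerivative(Matrix<double> input)
{
    return input.Map(x => x * (1.0 - x));
}

public void Train(Matrix<double> trainingInput, Matrix<double> trainingOutput, int iterations)
{
    for (int i = 0; i < iterations; i++)
    {
        var output = Sigmoid(trainingInput * _SynapticWeights);          // 4x3 * 3x1 -> 4x1
        var error = trainingOutput - output;                              // 4x1
        var delta = error.PointwiseMultiply(SigmoidDerivative(output));   // 4x1, Hadamard product
        _SynapticWeights += trainingInput.Transpose() * delta;            // 3x4 * 4x1 -> 3x1
    }
}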
I am using NeuronDotNet for neural networks in C#. In order to test the network (as well as train it), I wrote my own function to get the sum squared error. However, when I tested this function by running it on the training data and comparing it to the MeanSquaredError of the Backpropagation network, the results were different.
I discovered the reason for the different error is that the network returns different outputs when I run it myself compared to when it is run in the learning phase. I run it for each TrainingSample using:
double[] output = xorNetwork.Run(sample.InputVector);
In the learning phase it is using:
xorNetwork.Learn(trainingSet, cycles);
...with a delegate to trap the end sample event:
xorNetwork.EndSampleEvent +=
delegate(object network, TrainingSampleEventArgs args)
{
double[] test = xorNetwork.OutputLayer.GetOutput();
debug.addSampleOutput(test);
};
I tried doing this using the XOR problem, to keep it simple, and the outputs are still different. For example, at the end of the first epoch, the outputs from the EndSampleEvent delegate vs those from my function are:
Input: 01, Expected: 1, my_function: 0.703332, EndSampleEvent 0.734385
Input: 00, Expected: 0, my_function: 0.632568, EndSampleEvent 0.649198
Input: 10, Expected: 1, my_function: 0.650141, EndSampleEvent 0.710484
Input: 11, Expected: 0, my_function: 0.715175, EndSampleEvent 0.647102
Error: my_function: 0.280508, EndSampleEvent 0.291236
It's not something as simple as the output being captured at a different phase in the epoch; the outputs are not identical to those in the next/previous epoch.
I've tried debugging, but I am not an expert in Visual Studio and I'm struggling a bit with this. My project references the NeuronDotNet DLL. When I put breakpoints into my code, it won't step into the code from the DLL. I've looked elsewhere for advice on this and tried several solutions and got nowhere.
I don't think it's due to the 'observer effect', i.e. the Run method in my function causing the network to change. I have examined the code (in the project that builds the DLL) and I don't think Run changes any of the weights. The errors from my function tend to be lower than those from EndSampleEvent by a factor which exceeds the decrease in error from a typical epoch, i.e. it's as if the network is temporarily getting ahead of itself (in terms of training) during my code.
Neural networks are stochastic in the sense that they adjust their functions during training. However, the output should be deterministic. Why is it that the two sets of outputs are different?
EDIT: Here is the code I am using.
/***********************************************************************************************
COPYRIGHT 2008 Vijeth D
This file is part of NeuronDotNet XOR Sample.
(Project Website : http://neurondotnet.freehostia.com)
NeuronDotNet is a free software. You can redistribute it and/or modify it under the terms of
the GNU General Public License as published by the Free Software Foundation, either version 3
of the License, or (at your option) any later version.
NeuronDotNet is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with NeuronDotNet.
If not, see <http://www.gnu.org/licenses/>.
***********************************************************************************************/
using System;
using System.Collections.Generic;
using System.Drawing;
using System.IO;
using System.Text;
using System.Windows.Forms;
using NeuronDotNet.Core;
using NeuronDotNet.Core.Backpropagation;
using ZedGraph;
namespace NeuronDotNet.Samples.XorSample
{
public partial class MainForm : Form
{
private BackpropagationNetwork xorNetwork;
private double[] errorList;
private int cycles = 5000;
private int neuronCount = 3;
private double learningRate = 0.25d;
public MainForm()
{
InitializeComponent();
}
private void Train(object sender, EventArgs e)
{
EnableControls(false);
if (!int.TryParse(txtCycles.Text.Trim(), out cycles)) { cycles = 5000; }
if (!double.TryParse(txtLearningRate.Text.Trim(), out learningRate)) { learningRate = 0.25d; }
if (!int.TryParse(txtNeuronCount.Text.Trim(), out neuronCount)) { neuronCount = 3; }
if (cycles < 1) { cycles = 1; }
if (learningRate < 0.01) { learningRate = 0.01; }
if (neuronCount < 1) { neuronCount = 1; }
txtNeuronCount.Text = neuronCount.ToString();
txtCycles.Text = cycles.ToString();
txtLearningRate.Text = learningRate.ToString();
errorList = new double[cycles];
InitGraph();
LinearLayer inputLayer = new LinearLayer(2);
SigmoidLayer hiddenLayer = new SigmoidLayer(neuronCount);
SigmoidLayer outputLayer = new SigmoidLayer(1);
new BackpropagationConnector(inputLayer, hiddenLayer);
new BackpropagationConnector(hiddenLayer, outputLayer);
xorNetwork = new BackpropagationNetwork(inputLayer, outputLayer);
xorNetwork.SetLearningRate(learningRate);
TrainingSet trainingSet = new TrainingSet(2, 1);
trainingSet.Add(new TrainingSample(new double[2] { 0d, 0d }, new double[1] { 0d }));
trainingSet.Add(new TrainingSample(new double[2] { 0d, 1d }, new double[1] { 1d }));
trainingSet.Add(new TrainingSample(new double[2] { 1d, 0d }, new double[1] { 1d }));
trainingSet.Add(new TrainingSample(new double[2] { 1d, 1d }, new double[1] { 0d }));
Console.WriteLine("mse_begin,mse_end,output,outputs,myerror");
double max = 0d;
Console.WriteLine(NNDebug.Header);
List<NNDebug> debugList = new List<NNDebug>();
NNDebug debug = null;
xorNetwork.BeginEpochEvent +=
delegate(object network, TrainingEpochEventArgs args)
{
debug = new NNDebug(trainingSet);
};
xorNetwork.EndSampleEvent +=
delegate(object network, TrainingSampleEventArgs args)
{
double[] test = xorNetwork.OutputLayer.GetOutput();
debug.addSampleOutput(args.TrainingSample, test);
};
xorNetwork.EndEpochEvent +=
delegate(object network, TrainingEpochEventArgs args)
{
errorList[args.TrainingIteration] = xorNetwork.MeanSquaredError;
debug.setMSE(xorNetwork.MeanSquaredError);
double[] test = xorNetwork.OutputLayer.GetOutput();
GetError(trainingSet, debug);
max = Math.Max(max, xorNetwork.MeanSquaredError);
progressBar.Value = (int)(args.TrainingIteration * 100d / cycles);
//Console.WriteLine(debug);
debugList.Add(debug);
};
xorNetwork.Learn(trainingSet, cycles);
double[] indices = new double[cycles];
for (int i = 0; i < cycles; i++) { indices[i] = i; }
lblTrainErrorVal.Text = xorNetwork.MeanSquaredError.ToString("0.000000");
LineItem errorCurve = new LineItem("Error Dynamics", indices, errorList, Color.Tomato, SymbolType.None, 1.5f);
errorGraph.GraphPane.YAxis.Scale.Max = max;
errorGraph.GraphPane.CurveList.Add(errorCurve);
errorGraph.Invalidate();
writeOut(debugList);
EnableControls(true);
}
private const String pathFileName = "C:\\Temp\\NDN_Debug_Output.txt";
private void writeOut(IEnumerable<NNDebug> data)
{
using (StreamWriter streamWriter = new StreamWriter(pathFileName))
{
streamWriter.WriteLine(NNDebug.Header);
//write results to a file for each load combination
foreach (NNDebug debug in data)
{
streamWriter.WriteLine(debug);
}
}
}
private void GetError(TrainingSet trainingSet, NNDebug debug)
{
double total = 0;
foreach (TrainingSample sample in trainingSet.TrainingSamples)
{
double[] output = xorNetwork.Run(sample.InputVector);
double[] expected = sample.OutputVector;
debug.addOutput(sample, output);
int len = output.Length;
for (int i = 0; i < len; i++)
{
double error = output[i] - expected[i];
total += (error * error);
}
}
total = total / trainingSet.TrainingSampleCount;
debug.setMyError(total);
}
private class NNDebug
{
public const String Header = "output(00->0),output(01->1),output(10->1),output(11->0),mse,my_output(00->0),my_output(01->1),my_output(10->1),my_output(11->0),my_error";
public double MyErrorAtEndOfEpoch;
public double MeanSquaredError;
public double[][] OutputAtEndOfEpoch;
public double[][] SampleOutput;
private readonly List<TrainingSample> samples;
public NNDebug(TrainingSet trainingSet)
{
samples = new List<TrainingSample>(trainingSet.TrainingSamples);
SampleOutput = new double[samples.Count][];
OutputAtEndOfEpoch = new double[samples.Count][];
}
public void addSampleOutput(TrainingSample mySample, double[] output)
{
int index = samples.IndexOf(mySample);
SampleOutput[index] = output;
}
public void addOutput(TrainingSample mySample, double[] output)
{
int index = samples.IndexOf(mySample);
OutputAtEndOfEpoch[index] = output;
}
public void setMyError(double error)
{
MyErrorAtEndOfEpoch = error;
}
public void setMSE(double mse)
{
this.MeanSquaredError = mse;
}
public override string ToString()
{
StringBuilder sb = new StringBuilder();
foreach (double[] arr in SampleOutput)
{
writeOut(arr, sb);
sb.Append(',');
}
sb.Append(Math.Round(MeanSquaredError,6));
sb.Append(',');
foreach (double[] arr in OutputAtEndOfEpoch)
{
writeOut(arr, sb);
sb.Append(',');
}
sb.Append(Math.Round(MyErrorAtEndOfEpoch,6));
return sb.ToString();
}
}
private static void writeOut(double[] arr, StringBuilder sb)
{
bool first = true;
foreach (double d in arr)
{
if (first)
{
first = false;
}
else
{
sb.Append(',');
}
sb.Append(Math.Round(d, 6));
}
}
private void EnableControls(bool enabled)
{
btnTrain.Enabled = enabled;
txtCycles.Enabled = enabled;
txtNeuronCount.Enabled = enabled;
txtLearningRate.Enabled = enabled;
progressBar.Value = 0;
btnTest.Enabled = enabled;
txtTestInput.Enabled = enabled;
}
private void LoadForm(object sender, EventArgs e)
{
InitGraph();
txtCycles.Text = cycles.ToString();
txtLearningRate.Text = learningRate.ToString();
txtNeuronCount.Text = neuronCount.ToString();
}
private void InitGraph()
{
GraphPane pane = errorGraph.GraphPane;
pane.Chart.Fill = new Fill(Color.AntiqueWhite, Color.Honeydew, -45F);
pane.Title.Text = "Back Propagation Training - Error Graph";
pane.XAxis.Title.Text = "Training Iteration";
pane.YAxis.Title.Text = "Sum Squared Error";
pane.XAxis.MajorGrid.IsVisible = true;
pane.YAxis.MajorGrid.IsVisible = true;
pane.YAxis.MajorGrid.Color = Color.LightGray;
pane.XAxis.MajorGrid.Color = Color.LightGray;
pane.XAxis.Scale.Max = cycles;
pane.XAxis.Scale.Min = 0;
pane.YAxis.Scale.Min = 0;
pane.CurveList.Clear();
pane.Legend.IsVisible = false;
pane.AxisChange();
errorGraph.Invalidate();
}
private void Test(object sender, EventArgs e)
{
if (xorNetwork != null)
{
lblTestOutput.Text = xorNetwork.Run(
new double[] {double.Parse(txtTestInput.Text.Substring(2,4)),
double.Parse(txtTestInput.Text.Substring(8,4))})[0].ToString("0.000000");
}
}
}
}
It's not to do with normalisation, as the mapping between the two sets of outputs is not monotonic. For example, the output for {0,1} is higher in EndSampleEvent but for {1,1} it is lower; normalisation would be a simple linear function.
It's not to do with jitter either, as I've tried turning that off and the results are still different.
I have received an answer from my professor. The problem lies in the LearnSample method from the BackpropagationNetwork class which is called for each training sample every iteration.
The order of relevant events in this method is:
1) Add to the MeanSquaredError, which is calculated using only the output layer and the desired output.
2) Backpropagate errors to all earlier layers; this has no effect on the network.
3) Finally, recalculate the biases for each layer; this affects the network.
(3) is the last thing that occurs in the LearnSample method and happens after the calculation of the output error for each training instance. For the XOR example this means that the network is changed 4 times from the state it was in when the MSE calculation was made.
In theory, if you want to compare training and test errors, you should do a manual calculation (like my GetError function) and run it twice: once for each data set. However, in practice it might not be necessary to go to all this trouble, as the values are not that different.
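To make the ordering concrete, here is an illustrative sketch of what happens per sample inside LearnSample. The helper names ForwardPass, SquaredError, Backpropagate and UpdateWeightsAndBiases are hypothetical, not the actual NeuronDotNet API; only the ordering reflects the explanation above:
foreach (TrainingSample sample in trainingSet.TrainingSamples)   // one epoch
{
    double[] output = ForwardPass(sample.InputVector);

    // (1) MSE is accumulated from the output produced with the *current* weights...
    meanSquaredError += SquaredError(output, sample.OutputVector);

    // (2) ...errors are backpropagated through the earlier layers (no weight change yet)...
    Backpropagate(output, sample.OutputVector);

    // (3) ...and only then are the weights/biases updated.
    UpdateWeightsAndBiases(learningRate);
}
// By the end of the epoch the weights have been updated once per sample
// (4 times for XOR), so re-running every sample afterwards (as GetError does)
// uses newer weights than those behind the accumulated MeanSquaredError.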
I have a class called Matrix : IEnumerable<double> (a classic mathematical matrix; it is basically a 2D array with some goodies).
The class is immutable, so there is no way to change its values after instance creation.
If I want to create a matrix with pre-existing values, I have to pass an array to the constructor like this:
double[,] arr = new double[,]
{
{1,2,3}, {4,5,6}, {7,8,9}
};
Matrix m = new Matrix(arr);
Is there a way to turn it into this?
Matrix m = new Matrix
{
{1,2,3}, {4,5,6}, {7,8,9}
};
Update:
I found a hack-ish way to make it work. I'm not sure if this solution is advisable, but it works.
class Matrix : ICloneable, IEnumerable<double>
{
// Height & Width are initialized in a constructor beforehand.
/*
* Usage:
* var mat = new Matrix(3, 4)
* {
* {1,2,3}, {4,5,6}, {7,8,9}, {10,11,12}
* };
*/
int rowIndex;
bool allowArrayInitializer;
double[,] tempData;
double[,] data;
public void Add(params double[] args)
{
if(!allowArrayInitializer)
throw new InvalidOperationException("Cannot use array initializer");
if(args.Length != Width || rowIndex >= Height)
throw new InvalidOperationException("Invalid array initializer.");
for(int i = 0; i < Width; i++)
tempData[i, rowIndex] = args[i];
if(++rowIndex == Height)
data = tempData;
}
}
Not if it’s immutable; that syntactic sugar is implemented using Add().
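To see why: a collection initializer is nothing more than compiler-generated Add calls on a type that implements IEnumerable, so the instance has to be mutable while it is being initialized. Using the Matrix(3, 4) hack from the update as an example, the expansion looks roughly like this:
// The initializer syntax
Matrix m = new Matrix(3, 4) { {1,2,3}, {4,5,6}, {7,8,9}, {10,11,12} };

// is compiled into roughly:
Matrix tmp = new Matrix(3, 4);
tmp.Add(1, 2, 3);
tmp.Add(4, 5, 6);
tmp.Add(7, 8, 9);
tmp.Add(10, 11, 12);
Matrix m2 = tmp;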
You will not be able to do it via a collection initializer, but you should be able to do it via a parameterized constructor.
You can see the sample code below:
using System.Collections;
using System.Collections.Generic;
using System.Linq;

class Matrix : IEnumerable<double>
{
    double[,] input;

    public Matrix(double[,] inputArray)
    {
        input = inputArray;
    }

    public IEnumerator<double> GetEnumerator()
    {
        // A 2D array's GetEnumerator() is non-generic, so cast the elements
        // instead of casting the enumerator (which would throw at runtime).
        return input.Cast<double>().GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
In the main method:
static void Main(string[] args)
{
var m = new Matrix(new double[,] { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } });
}
I hope this helps you!
Instead of deriving from IEnumerable, I would use a property:
class Matrix
{
public double[,] Arr { get; set; }
}
Matrix m = new Matrix
{
Arr = new double [,] { {1d,2d,3d}, { 4d,5d, 6d}}
};