Confused with implementation of a mathematical formula - C#

Could you please help me with the implementation of a mathematical formula in C#? Here it is:
R(t) = ∑ (x[i] - M) * (x[i+t] - M)
where the sum ∑ goes from i = 0 to N - t,
M is a constant, t = [0, ..., n], and x is a vector of random data.
My implementation doesn't work correctly and I don't know where the mistake is. I know I'm asking you to do it for me, but I don't have anyone else to ask. Your help will be very much appreciated! Thank you!
Here is my code:
for (int i = 0; i < tvect.Length; i++)
{
    sum[i] = 0;
    t = tvect[i];
    for (int j = 0; j < (N - t); j++)
    {
        sum[i] = sum[i] + (data[j] - M) * (data[j + t] - M);
    }
}

float[] R(int[] t)
{
    float[] sum = new float[t.Length];
    for (int j = 0; j < t.Length; j++)
    {
        sum[j] = 0;
        for (int i = 0; i < N - t[j]; i++)
        {
            sum[j] += (x[i] - M) * (x[i + t[j]] - M);
        }
    }
    return sum;
}

float sum = 0.0f;
for (int j = 0; j < t.Length; j++)
{
    for (int i = 0; i < N - t[j]; i++)
    {
        sum += (x[i] - M) * (x[i + t[j]] - M);
    }
}
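Note that the last snippet adds every lag into a single total; if you want one value per lag, as in the formula, something like the following self-contained sketch should work. The names x, M and the example data are placeholders for whatever your real program uses:
using System;

class AutocorrelationDemo
{
    // R(t) = sum over i of (x[i] - M) * (x[i + t] - M) for each lag t,
    // keeping i + t inside the bounds of the data array.
    static float[] R(float[] x, int[] lags, float M)
    {
        int N = x.Length;
        float[] sum = new float[lags.Length];
        for (int j = 0; j < lags.Length; j++)
        {
            int t = lags[j];
            for (int i = 0; i < N - t; i++)
            {
                sum[j] += (x[i] - M) * (x[i + t] - M);
            }
        }
        return sum;
    }

    static void Main()
    {
        float[] x = { 1.0f, 2.0f, 3.0f, 2.0f, 1.0f };
        int[] lags = { 0, 1, 2 };
        float M = 1.8f; // e.g. the mean of x
        float[] r = R(x, lags, M);
        for (int j = 0; j < r.Length; j++)
            Console.WriteLine($"R({lags[j]}) = {r[j]}");
    }
}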

How to Break a For Loop From an If Statement Inside the Loop (C#)

I am working on an A* pathfinding method that uses a custom class instead of nodes, but I am having issues with my loops. The first for loop using int i is able to go up to 3 (Player1.instance.movement = 3), but I need an if statement inside that loop to check whether the target position has already been found. I am wondering if it is possible to break my for loop when my if statement's condition is false.
public void GetNeighbors(Tile originTile)
{
    Tile originalTile = originTile;
    nextTile.Clear();
    int minX = 0;
    int minY = 0;
    var originCostFunc = Mathf.Infinity;
    for (int i = 0; i < Player1.instance.movement; i++)
    {
        for (int x = -1; x <= 1; x++)
        {
            for (int y = -1; y <= 1; y++)
            {
                if (x != y && y != x)
                {
                    var costX = Mathf.Abs((originTile.transform.position.x + x) - originalTile.transform.position.x);
                    var costY = Mathf.Abs((originTile.transform.position.y + y) - originalTile.transform.position.y);
                    var distanceX = Mathf.Abs(targetPos.transform.position.x - (originTile.transform.position.x + x));
                    var distanceY = Mathf.Abs(targetPos.transform.position.y - (originTile.transform.position.y + y));
                    var costFunc = costX + costY + distanceX + distanceY;
                    if (costFunc <= originCostFunc)
                    {
                        originCostFunc = costFunc;
                        minX = x;
                        minY = y;
                        Debug.Log($"x: {x}, y: {y}");
                    }
                }
            }
        }
        nextTile.Add(GridManagerHandPlaced.instance.GetTileAtPosition(new Vector2(originTile.transform.position.x + minX, originTile.transform.position.y + minY)));
        if (nextTile[i] != targetPos)
        {
            originTile = nextTile[i];
        }
        else
        {
            break;
        }
    }
    DisplayPath();
}
You can break out of the loops one level at a time by checking a flag:
bool breakLoop = false;
for (int i = 0; i < length; i++)
{
    for (int j = 0; j < length; j++)
    {
        for (int k = 0; k < length; k++)
        {
            breakLoop = nextTile == target;
            if (breakLoop)
                break;
        }
        if (breakLoop)
            break;
    }
    if (breakLoop)
        break;
}
Or move the search logic into a separate method and return from any depth of nested loops:
Tile path = FindPath();
Display(path);

Tile FindPath()
{
    for (int i = 0; i < length; i++)
    {
        for (int j = 0; j < length; j++)
        {
            for (int k = 0; k < length; k++)
            {
                if (nextTile == target)
                    return nextTile;
            }
        }
    }
    return null;
}
Never use the goto operator.
This is one of the few valid cases where I'd use goto. In fact, this is the example given in the docs for when it should be used.
void CheckMatrices(Dictionary<string, int[][]> matrixLookup, int target)
{
    foreach (var (key, matrix) in matrixLookup)
    {
        for (int row = 0; row < matrix.Length; row++)
        {
            for (int col = 0; col < matrix[row].Length; col++)
            {
                if (matrix[row][col] == target)
                {
                    goto Found;
                }
            }
        }
        Console.WriteLine($"Not found {target} in matrix {key}.");
        continue;
    Found:
        Console.WriteLine($"Found {target} in matrix {key}.");
    }
}
Note the syntax for the label is simply myLabel: and you can place it anywhere in procedurally executable code.
For the sake of covering other ways of handling this situation, here is the boolean-flag solution.
bool breakLoops = false;
for (int i = 0; i < length1; i++)
{
    for (int ii = 0; ii < length2; ii++)
    {
        for (int iii = 0; iii < length3; iii++)
        {
            if (breakingCondition)
            {
                breakLoops = true;
                break;
            }
        }
        if (breakLoops) break;
    }
    if (breakLoops) break;
}
Simple and straightforward, but requires a break condition check at the end of each loop that you want to break out of.
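Another option that fits the original GetNeighbors method is a C# local function: the search runs inside it and a plain return exits every nested loop at once. This is only a sketch of the pattern; Search, length and IsTarget are illustrative placeholders, not the asker's actual members:
public void GetNeighbors(Tile originTile)
{
    // Local function: returning from it exits all nested loops in one step.
    Tile Search()
    {
        for (int i = 0; i < length; i++)
        {
            for (int x = -1; x <= 1; x++)
            {
                for (int y = -1; y <= 1; y++)
                {
                    if (IsTarget(originTile, x, y)) // placeholder for the real target check
                        return originTile;
                }
            }
        }
        return null; // nothing found
    }

    Tile found = Search();
    if (found != null)
        DisplayPath();
}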

Improved Selection Sort Out of Bounds Error

I have a method that is an "improved version of selection sort". However, the code is not running, because the line temp[x] = 0; gives an index-out-of-bounds error. I do not want to use an ArrayList. How would I change this line to stay within the bounds of the array?
public static void ImprovedSelectionSort(int[] Array)
{
    Stopwatch timer = new Stopwatch();
    timer.Start();
    int[] temp = new int[Array.Length];
    for (int x = 1; x <= Array.Length; x++)
    {
        //OtherList.Add(0); -- what I want to do
        temp[x] = 0;
    }
    int n = Array.Length;
    int i = 0;
    while (i < n)
    {
        int rear = 0;
        int curMax = Array[i];
        temp[rear] = i;
        int j = i + 1;
        while (j < n)
        {
            if (curMax < Array[j])
            {
                curMax = Array[j];
                rear = -1;
            }
            if (curMax == Array[j])
            {
                rear = rear + 1;
                temp[rear] = j;
            }
            j++;
        }
        int front = 0;
        int p = Array[temp[front]];
        while (front <= rear)
        {
            int temporary = p;
            Array[temp[front]] = Array[i];
            Array[i] = temporary;
            i++;
            front += 1;
        }
    }
}
for (int x = 1; x <= Array.Length; x++)
This is most likely the issue. The last index in an array is the length minus 1 (so, a 52-card deck goes from 0..51). Changing the "x <= Array.Length" component to "x < Array.Length" should fix the issue.
Change the <= to < in the for loop. This is a common mistake among novice devs, and not something you should sweat. But it's a good thing to keep in mind for the future.
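One extra observation, not from the original answers: a new int[] in C# is already zero-filled, so the initialization loop that throws can simply be dropped:
int[] temp = new int[Array.Length]; // every element already starts at 0 in C#
// the loop that assigned temp[x] = 0 is not needed at all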

How should weights be optimized with the gradient descent algorithm in order for it to work?

I have a neural network in Visual Studio. For the loss function I am using a basic cost, (pred - target)^2, and after I finish an epoch I update the parameters, but the algorithm doesn't work.
No matter what my network configuration is, the predictions are not right (it is the same output for all the inputs) and the loss function is not optimized. It stays the same through all the epochs.
void calc_lyr(int x, int y, int idx, float target) // this function calculates the neuron values based on the previous layer
{
if (x == -1 || y == 0) // if it's the first layer, get the data from the input nodes
{
for (int i = 0; i < neurons[y]; i++)
{
float sum = 0;
for (int j = 0; j < inputTypes.Count; j++)
{
sum += weights[x+1][j][i] * training_test[idx][j];
}
sum = relu(sum);
vals[y+1][i] = sum;
}
}
else
{
for(int i = 0; i < neurons[y]; i++)
{
float sum = 0;
for(int j = 0; j < neurons[x]; j++)
{
sum += weights[x+1][j][i] * vals[x+1][j] + biases[y][i];
}
sum = relu(sum);
vals[y+1][i] = sum;
}
}
}
void train()
{
log("Proces de antrenare inceput ----------------- " + DateTime.Now.ToString());
vals = new List<List<float>>();
weights = new List<List<List<float>>>();
biases = new List<List<float>>();
Random randB = new Random(DateTime.Now.Millisecond);
Random randW = new Random(DateTime.Now.Millisecond);
for (int i = 0; i <= nrLayers; i++)
{
progressEpochs.Value =(int)(((float)i * (float)nrLayers) / 100.0f);
vals.Add(new List<float>());
weights.Add(new List<List<float>>());
if (i == 0)
{
for (int j = 0; j < inputTypes.Count; j++)
{
vals[i].Add(0);
}
}
else
{
biases.Add(new List<float>());
for (int j = 0; j < neurons[i-1]; j++)
{
vals[i].Add(0);
float valB = (float)randB.NextDouble();
biases[i-1].Add(valB - ((int)valB));
}
}
}
float valLB = (float)randB.NextDouble();
biases.Add(new List<float>());
biases[nrLayers].Add(valLB - ((int)valLB));
for (int i = 0; i <= nrLayers; i++)
{
if (i == 0)
{
for (int j = 0; j < inputTypes.Count; j++)
{
weights[i].Add(new List<float>());
for (int x = 0; x < neurons[i]; x++)
{
float valW = (float)randW.NextDouble();
weights[i][j].Add(valW);
}
}
}
else if (i == nrLayers)
{
for (int j = 0; j < neurons[i-1]; j++) {
weights[i].Add(new List<float>());
weights[i][j].Add(0);
}
}
else
{
for (int j = 0; j < neurons[i - 1]; j++)
{
weights[i].Add(new List<float>());
for (int x = 0; x < neurons[i]; x++)
{
float valW = (float)randW.NextDouble();
weights[i][j].Add(valW);
}
}
}
}
Random rand = new Random(DateTime.Now.Millisecond);
log("\n\n");
for (int i = 0; i < epochs; i++)
{
log("Epoch " + (i + 1).ToString() + " inceput ---> " + DateTime.Now.ToString());
int idx = rand.Next() % training_test.Count;
float target = outputsPossible.IndexOf(training_labels[idx]);
for (int j = 0; j < nrLayers; j++)
{
calc_lyr(j - 1, j, idx, target);
}
float total_val = 0;
for(int x = 0; x < neurons[nrLayers - 1]; x++)
{
float val = relu(weights[nrLayers][x][0] * vals[nrLayers][x] + biases[nrLayers][0]);
total_val += val;
}
total_val = sigmoid(total_val);
float cost_res = cost(total_val, target);
log("Epoch " + (i+1).ToString() + " terminat ----- " + DateTime.Now.ToString() + "\n");
log("Eroare epoch ---> " + (cost_res<1?"0":"") + cost_res.ToString("##.##") + "\n\n\n");
float cost_der = cost_d(total_val, target);
for (int a = 0; a < weights.Count; a++)
{
for (int b = 0; b < weights[a].Count; b++)
{
for (int c = 0; c < weights[a][b].Count; c++)
{
weights[a][b][c]-=cost_der*learning_rate * sigmoid_d(weights[a][b][c]);
}
}
}
for (int a = 0; a < nrLayers; a++)
{
for (int b = 0; b < neurons[a]; b++)
{
biases[a][b] -= cost_der * learning_rate;
}
}
}
hasTrained = true;
testBut.Enabled = hasTrained;
MessageBox.Show("Antrenament complet!");
SavePrompt sp = new SavePrompt();
sp.Show();
}
How can it be changed so that the weights, biases and loss function are actually optimized? For now, when I debug, the weights are changing, but the loss function stays at the same value.
I solved it by using AForge.NET: link
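For reference, the usual gradient descent update multiplies the error signal by the input that fed each weight, rather than by a function of the weight itself. Below is a minimal sketch for a single sigmoid output unit with squared error; Sigmoid, Forward and Step are illustrative names, not part of the asker's code or of AForge.NET:
using System;

static class GradientStepDemo
{
    static float Sigmoid(float z) => 1f / (1f + (float)Math.Exp(-z));

    static float Forward(float[] x, float[] w, float b)
    {
        float z = b;
        for (int i = 0; i < x.Length; i++)
            z += w[i] * x[i];
        return Sigmoid(z);
    }

    // One gradient descent step for the loss L = (pred - target)^2.
    static void Step(float[] x, float target, float[] w, ref float b, float lr)
    {
        float pred = Forward(x, w, b);
        // dL/dz = dL/dpred * dpred/dz = 2 * (pred - target) * pred * (1 - pred)
        float delta = 2f * (pred - target) * pred * (1f - pred);
        for (int i = 0; i < w.Length; i++)
            w[i] -= lr * delta * x[i]; // gradient w.r.t. each weight uses its input
        b -= lr * delta;               // gradient w.r.t. the bias
    }
}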

Genetic algorithm - clustering points on screen

Below is the code I wrote for clustering using a genetic algorithm. The points come from a PictureBox: their (X, Y) coordinates are generated randomly before this class is called. However, the result of this algorithm is much worse than the k-means or LBG implementations I'm comparing it to. Can someone take a look for any errors in the algorithm? Maybe I omitted something. Thanks.
I did this one using arrays; the other two I did using lists, but I don't think that should have any impact on the result.
public class geneticAlgorithm
{
static int pom = 0;
static PictureBox pb1;
public geneticAlgorithm(PictureBox pb)
{
pb1 = pb;
}
public static void doGA(PointCollection points, int clusterCounter)
//points is a list of points,
//those point have (X,Y) coordinates generated randomly from pictureBox
//coordinates. clusterCounter is how many clusters I want to divide the points into
{
//this part converts list of points into array testtab,
//where each array field hold X,Y of a point
Point[] test = new Point[points.Count];
test = points.ToArray();
double[][] testtab = new double[test.Length][];
for (int i = 0; i < testtab.GetLength(0); i++)
{
testtab[i] = new double[2];
testtab[i][0] = test[i].X;
testtab[i][1] = test[i].Y;
}
//end of converting
int n = testtab.GetLength(0);
int k = clusterCounter;
int chromosomeCount = 500;
int dimensions = 2;
double[][] testClusters = new double[k][];
for (int i = 0; i < k; i++)
{
testClusters[i] = new double[dimensions];
}
double[][] testChromosomes = new double[chromosomeCount][];
for (int i = 0; i < chromosomeCount; i++)
{
testChromosomes[i] = new double[2 * k];
}
int[][] testChromosomesInt = new int[chromosomeCount][];
for (int i = 0; i < chromosomeCount; i++)
{
testChromosomesInt[i] = new int[2 * k];
}
double[] partner = new double[chromosomeCount];
double[][] roulette = new double[chromosomeCount][];
for (int i = 0; i < chromosomeCount; i++)
{
roulette[i] = new double[1];
}
double[][] errors = new double[chromosomeCount][];
for (int i = 0; i < chromosomeCount; i++)
{
errors[i] = new double[1];
}
double crossingPossibility = 0.01;
double mutationPossibility = 0.0001;
int maxIterations = 10000;
//here I create chromosomes and initial clusters
for (int j = 0; j < chromosomeCount; j++)
{
for (int i = 0; i < k; i++)
{
Random rnd = new Random();
int r = rnd.Next(n);
for (int q = 0; q < dimensions; q++)
{
testClusters[i][q] = testtab[r][q];
}
}
int help = 0;
for (int i = 0; i < k; i++)
for (int l = 0; l < dimensions; l++) // here is creation of chromosome
{
testChromosomes[j][help] = testClusters[i][l];
help++;
}
//end
//here I call accomodation function to see which of them are good
errors[j][0] = accomodationFunction(testClusters, testtab, n, k);
//end
//cleaning of the cluster table
testClusters = new double[k][];
for (int i = 0; i < k; i++)
{
testClusters[i] = new double[dimensions];
}
}
//end
for (int counter = 0; counter < maxIterations; counter++)
{
//start of the roulette
double s = 0.0;
for (int i = 0; i < chromosomeCount; i++)
s += errors[i][0];
for (int i = 0; i < chromosomeCount; i++)
errors[i][0] = chromosomeCount * errors[i][0] / s;
int idx = 0;
for (int i = 0; i < chromosomeCount; i++)
for (int j = 0; i < errors[i][0] && idx < chromosomeCount; j++)
{
roulette[idx++][0] = i;
}
double[][] newTab = new double[chromosomeCount][];
for (int i = 0; i < chromosomeCount; i++)
{
newTab[i] = new double[2 * k];
}
Random rnd = new Random();
for (int i = 0; i < chromosomeCount; i++)
{
int q = rnd.Next(chromosomeCount);
newTab[i] = testChromosomes[(int)roulette[q][0]];
}
testChromosomes = newTab;
//end of roulette
//start of crossing chromosomes
for (int i = 0; i < chromosomeCount; i++)
partner[i] = (rnd.NextDouble() < crossingPossibility + 1) ? rnd.Next(chromosomeCount) : -1;
for (int i = 0; i < chromosomeCount; i++)
if (partner[i] != -1)
testChromosomes[i] = crossing(testChromosomes[i], testChromosomes[(int)partner[i]], rnd.Next(2 * k), k);
//end of crossing
//converting double to int
for (int i = 0; i < chromosomeCount; i++)
for (int j = 0; j < 2 * k; j++)
testChromosomes[i][j] = (int)Math.Round(testChromosomes[i][j]);
//end of conversion
//start of mutation
for (int i = 0; i < chromosomeCount; i++)
if (rnd.NextDouble() < mutationPossibility + 1)
testChromosomesInt[i] = mutation(testChromosomesInt[i], rnd.Next(k * 2), rnd.Next(10));
//end of mutation
}
//painting of the found centre on the picture box
int centrum = max(errors, chromosomeCount);
Graphics g = pb1.CreateGraphics();
SolidBrush brush = new SolidBrush(Color.Red);
for (int i = 0; i < 2 * k - 1; i += 2)
{
g.FillRectangle(brush, testChromosomesInt[centrum][i], testChromosomesInt[centrum][i + 1], 20, 20);
}
return;
}
//end of painting
public static int max(double[][] tab, int chromosomeCount)
{
double max = 0;
int k = 0;
for (int i = 0; i < chromosomeCount; i++)
{
if (max < tab[i][0])
{
max = tab[i][0];
k = i;
}
}
return k;
}
public static int[] mutation(int[] tab, int elem, int bit)
{
int mask = 1;
mask <<= bit;
tab[elem] = tab[elem] ^ mask;
return tab;
}
public static double[] crossing(double[] tab, double[] tab2, int p, int k)
{
double[] hold = new double[2 * k];
for (int i = 0; i < p; i++)
hold[i] = tab[i];
for (int i = p; i < 2 * k; i++)
hold[i] = tab2[i];
return hold;
}
//accomodation function, checks to which centre which point belongs based on distance
private static double accomodationFunction(double[][] klastry, double[][] testtab, int n, int k)
{
double Error = 0;
for (int i = 0; i < n; i++)
{
double min = 0;
int ktory = 0;
for (int j = 0; j < k; j++)
{
double dist = distance(klastry[j], testtab[i]);
if (j == 0)
{
min = dist;
}
if (min > dist)
{
min = dist;
ktory = j;
}
}
Error += min;
}
pom++;
return 1 / Error;
}
public static double distance(double[] tab, double[] tab2)
{
double dist = 0.0;
for (int i = 0; i < tab.GetLength(0); i++)
dist += Math.Pow(tab[i] - tab2[i], 2);
return dist;
}
}
The algorithm should work like this (excuse my English if it's not the best):
1. Get random points (let's say 100).
2. Decide how many clusters we want to split them into (the same thing I would do using k-means, for example).
3. Create a starting population of chromosomes.
4. Through roulette selection, crossover and mutation, pick the best x centres, where x is the number of clusters we want.
5. Paint them.
Here are some results showing why I think it's wrong (100 points, 5 clusters). [Screenshots of the k-means, LBG and genetic (without colors) results are not included here.]
I hope this clarifies a bit.
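As a point of comparison for step 4, here is a small, self-contained sketch of fitness-proportional (roulette-wheel) selection; Select, fitness and rng are illustrative placeholders, not names from the code above:
using System;

static class RouletteDemo
{
    // Pick one index from the population with probability proportional to its fitness.
    // Assumes all fitness values are non-negative and at least one is positive.
    static int Select(double[] fitness, Random rng)
    {
        double total = 0;
        for (int i = 0; i < fitness.Length; i++)
            total += fitness[i];

        double spin = rng.NextDouble() * total;
        double running = 0;
        for (int i = 0; i < fitness.Length; i++)
        {
            running += fitness[i];
            if (spin <= running)
                return i;
        }
        return fitness.Length - 1; // numerical fallback
    }

    static void Main()
    {
        var rng = new Random(); // note: create one Random and reuse it, don't re-seed inside loops
        double[] fitness = { 0.1, 0.5, 0.4 };
        Console.WriteLine(Select(fitness, rng));
    }
}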

How can I use hopfield network to learn more patterns?

Is there any relation between the number of neurons and the ability of a Hopfield network to recognize patterns?
I wrote a neural network program in C# to recognize patterns with a Hopfield network. My network has 64 neurons. When I train the network on 2 patterns, everything works nicely and easily, but when I train it on more patterns, the Hopfield network can't find the answer!
So, according to my code, how can I use a Hopfield network to learn more patterns?
Should I make changes in this code?
Here is my Train() function:
public void Train(bool[,] pattern)
{
    //N is the number of rows in our square matrix
    //Convert the input pattern to bipolar
    int[,] PatternBipolar = new int[N, N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
        {
            if (pattern[i, j] == true)
            {
                PatternBipolar[i, j] = 1;
            }
            else
            {
                PatternBipolar[i, j] = -1;
            }
        }
    //convert to row matrix
    int count1 = 0;
    int[] RowMatrix = new int[(int)Math.Pow(N, 2)];
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
        {
            RowMatrix[count1] = PatternBipolar[i, j];
            count1++;
        }
    //convert to column matrix
    int count2 = 0;
    int[] ColMatrix = new int[(int)Math.Pow(N, 2)];
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
        {
            ColMatrix[count2] = PatternBipolar[i, j];
            count2++;
        }
    //multiplication
    int[,] MultipliedMatrix = new int[(int)Math.Pow(N, 2), (int)Math.Pow(N, 2)];
    for (int i = 0; i < (int)Math.Pow(N, 2); i++)
        for (int j = 0; j < (int)Math.Pow(N, 2); j++)
        {
            MultipliedMatrix[i, j] = ColMatrix[i] * RowMatrix[j];
        }
    //cells on the northwest diagonal get set to zero
    for (int i = 0; i < (int)Math.Pow(N, 2); i++)
        MultipliedMatrix[i, i] = 0;
    // WeightMatrix + MultipliedMatrix
    for (int i = 0; i < (int)Math.Pow(N, 2); i++)
        for (int j = 0; j < (int)Math.Pow(N, 2); j++)
        {
            WeightMatrix[i, j] += MultipliedMatrix[i, j];
        }
}
And here is the Present() function (it returns the network's answer for a given pattern):
public void Present(bool[,] Pattern)
{
    int[] output = new int[(int)Math.Pow(N, 2)];
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
        {
            OutputShowMatrix[i, j] = 0;
        }
    //convert bool to binary
    int[] PatternBinary = new int[(int)Math.Pow(N, 2)];
    int count = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
        {
            if (Pattern[i, j] == true)
            {
                PatternBinary[count] = 1;
            }
            else
            {
                PatternBinary[count] = 0;
            }
            count++;
        }
    count = 0;
    int activation = 0;
    for (int j = 0; j < (int)Math.Pow(N, 2); j++)
    {
        for (int i = 0; i < (int)Math.Pow(N, 2); i++)
        {
            activation = activation + (PatternBinary[i] * WeightMatrix[i, j]);
        }
        if (activation > 0)
        {
            output[count] = 1;
        }
        else
        {
            output[count] = 0;
        }
        count++;
        activation = 0;
    }
    count = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
        {
            OutputShowMatrix[i, j] = output[count++];
        }
}
In the images below I trained the Hopfield network on the characters A and P, and when the input pattern is like A or P, the network recognizes it correctly.
Then I trained it on the character C:
This is where everything goes wrong!
Now if I enter a pattern like C, this issue happens:
And if I enter a pattern like A, see what happens:
And if I train more patterns, the whole grid becomes black!
I've spotted only one mistake in your code: you perform only one iteration of the node value calculation, without checking whether the values have converged. I've fixed the method like this:
public bool[,] Present(bool[,] pattern)
{
    bool[,] result = new bool[N, N];
    int[] activation = new int[N * N];
    int count = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
        {
            activation[count++] = pattern[i, j] ? 1 : 0;
        }
    bool convergence = false;
    while (!convergence)
    {
        convergence = true;
        var previousActivation = (int[])activation.Clone();
        for (int i = 0; i < N * N; i++)
        {
            activation[i] = 0;
            for (int j = 0; j < N * N; j++)
            {
                activation[i] += (previousActivation[j] * WeightMatrix[i, j]);
            }
            activation[i] = activation[i] > 0 ? 1 : 0;
            if (activation[i] != previousActivation[i])
            {
                convergence = false;
            }
        }
    }
    count = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
        {
            result[i, j] = activation[count++] == 1;
        }
    return result;
}
This slightly improves the results; however, it should probably also be changed to update the values asynchronously to avoid cycles.
Unfortunately, this still shows the behaviour you've described. It results from the phenomenon called spurious patterns. For the network to learn more than one pattern, consider training it with the Hebb rule. You can read about spurious patterns, stability and learning of Hopfield networks here and here.
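To make the "asynchronously" suggestion concrete, here is a minimal sketch of such an update pass; it takes the flattened 0/1 state and the weight matrix as parameters, and RunAsync is just an illustrative name, not part of the answer above. As a rough rule of thumb, a Hopfield network trained with the Hebb rule stores only about 0.14 × (number of neurons) random patterns reliably, so a 64-neuron network tops out around 8 or 9, and fewer when the patterns are correlated like letter shapes.
// Sketch only: asynchronous Hopfield update, reusing the 0/1 convention from the code above.
int[] RunAsync(int[] activation, int[,] WeightMatrix, int N, Random rng)
{
    int n = N * N;
    int stableSteps = 0;
    // Heuristic stopping rule: n consecutive single-unit updates with no change.
    while (stableSteps < n)
    {
        int i = rng.Next(n);              // pick one unit at random
        int sum = 0;
        for (int j = 0; j < n; j++)
            sum += WeightMatrix[i, j] * activation[j];

        int newValue = sum > 0 ? 1 : 0;   // same threshold as the answer's code
        if (newValue == activation[i])
        {
            stableSteps++;
        }
        else
        {
            activation[i] = newValue;     // update in place, so later units see the new value
            stableSteps = 0;
        }
    }
    return activation;
}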
