This is a piece of my code, which calculates a derivative of an image (essentially a convolution with a small filter kernel). It works correctly, but it takes a long time because of the image height and width.
"Data" is a greyscale image bitmap.
"Filter" is a [3,3] matrix.
The maximum values of "fh" and "fw" are 3.
I am looking to speed up this code.
I also tried using Parallel.For, but it didn't work correctly (I got an index-out-of-bounds error).
private float[,] Differentiate(int[,] Data, int[,] Filter)
{
    int i, j, k, l, Fh, Fw;

    Fw = Filter.GetLength(0);
    Fh = Filter.GetLength(1);
    float sum = 0;
    float[,] Output = new float[Width, Height];

    for (i = Fw / 2; i <= (Width - Fw / 2) - 1; i++)
    {
        for (j = Fh / 2; j <= (Height - Fh / 2) - 1; j++)
        {
            sum = 0;
            for (k = -Fw / 2; k <= Fw / 2; k++)
            {
                for (l = -Fh / 2; l <= Fh / 2; l++)
                {
                    sum = sum + Data[i + k, j + l] * Filter[Fw / 2 + k, Fh / 2 + l];
                }
            }
            Output[i, j] = sum;
        }
    }
    return Output;
}
For parallel execution you need to drop the C-style variable declarations at the beginning of the method and declare each variable in the actual scope where it is used, so they are not shared between threads. Making the code parallel should provide some performance benefit, but turning every loop into a Parallel.For is not a good idea, since there is a limit to the number of threads that can actually run in parallel. I would parallelize the top-level loop only:
private static float[,] Differentiate(int[,] Data, int[,] Filter)
{
    var Fw = Filter.GetLength(0);
    var Fh = Filter.GetLength(1);
    float[,] Output = new float[Width, Height];

    // Note: Parallel.For's second argument is exclusive, so it must be
    // Width - Fw / 2 to cover the same rows as the sequential loop.
    Parallel.For(Fw / 2, Width - Fw / 2, i =>
    {
        for (var j = Fh / 2; j <= (Height - Fh / 2) - 1; j++)
        {
            var sum = 0f; // float, as in the sequential version
            for (var k = -Fw / 2; k <= Fw / 2; k++)
            {
                for (var l = -Fh / 2; l <= Fh / 2; l++)
                {
                    sum += Data[i + k, j + l] * Filter[Fw / 2 + k, Fh / 2 + l];
                }
            }
            Output[i, j] = sum;
        }
    });
    return Output;
}
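A quick sanity check is to run both versions on the same random data and compare the results. A minimal sketch, assuming Width and Height are set and the parallel version is renamed DifferentiateParallel (a hypothetical name, so both can coexist):

// Sketch: verify the parallel version matches the sequential one exactly.
var rng = new Random(42);
var data = new int[Width, Height];
for (int x = 0; x < Width; x++)
    for (int y = 0; y < Height; y++)
        data[x, y] = rng.Next(256);

int[,] filter = { { 1, 0, -1 }, { 2, 0, -2 }, { 1, 0, -1 } }; // e.g. a Sobel kernel

var expected = Differentiate(data, filter);        // sequential version
var actual = DifferentiateParallel(data, filter);  // parallel version above

for (int x = 0; x < Width; x++)
    for (int y = 0; y < Height; y++)
        if (expected[x, y] != actual[x, y])
            throw new Exception($"Mismatch at ({x}, {y})");

Both versions perform identical arithmetic per element, so the outputs should match exactly, not just approximately.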
This is a perfect example of a task where using the GPU is better than using the CPU. A GPU can perform trillions of floating-point operations per second (TFLOPS), while CPU performance is still measured in GFLOPS. The catch is that the GPU is only worthwhile for SIMD-style work (Single Instruction, Multiple Data): it excels at data-parallel tasks, and if different data needs different instructions, the GPU has no advantage.
In your program, all the elements of your bitmap go through the same calculations: the same computations, just with slightly different data (SIMD!). So using the GPU is a great option. This won't be too complex, because with your calculation the threads on the GPU would not need to exchange information, nor would they depend on results of previous iterations (each element would be processed by a different GPU thread).
You can use, for example, OpenCL to easily access the GPU. More on OpenCL and using the GPU here: https://www.codeproject.com/Articles/502829/GPGPU-image-processing-basics-using-OpenCL-NET
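To give a sense of scale, the whole inner computation becomes one short kernel. This is only a sketch of what the kernel source might look like in OpenCL C, embedded as a C# string; the host-side buffer and kernel setup through OpenCL.NET is omitted, and all names here are illustrative:

// Illustrative only: an OpenCL C kernel for the convolution, kept as a C# string.
// One GPU work-item computes one output pixel; fw and fh are the filter dimensions.
const string KernelSource = @"
__kernel void differentiate(__global const int* data,
                            __global const int* filter,
                            __global float* output,
                            const int width, const int height,
                            const int fw, const int fh)
{
    int i = get_global_id(0);
    int j = get_global_id(1);
    if (i < fw / 2 || i >= width - fw / 2 || j < fh / 2 || j >= height - fh / 2)
        return; // skip the border, like the CPU loops do

    float sum = 0;
    for (int k = -fw / 2; k <= fw / 2; k++)
        for (int l = -fh / 2; l <= fh / 2; l++)
            sum += data[(i + k) * height + (j + l)] * filter[(fw / 2 + k) * fh + (fh / 2 + l)];
    output[i * height + j] = sum;
}";

The 2D arrays are flattened to 1D buffers here (index i * height + j), since OpenCL buffers are one-dimensional.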
So I converted a recursive function to an iterative one and then used Parallel.ForEach, but when I ran it through VTune it was only really using 2 logical cores for the majority of its run time.
I decided to attempt to use managed threads instead, and converted this code:
for (int N = 2; N <= length; N <<= 1)
{
    int maxThreads = 4;
    var workGroup = Enumerable.Range(0, maxThreads);
    Parallel.ForEach(workGroup, i =>
    {
        for (int j = ((i / maxThreads) * length); j < (((i + 1) / maxThreads) * length); j += N)
        {
            for (int k = 0; k < N / 2; k++)
            {
                int evenIndex = j + k;
                int oddIndex = j + k + (N / 2);

                var even = output[evenIndex];
                var odd = output[oddIndex];

                output[evenIndex] = even + odd * twiddles[k * (length / N)];
                output[oddIndex] = even + odd * twiddles[(k + (N / 2)) * (length / N)];
            }
        }
    });
}
Into this:
for (int N = 2; N <= length; N <<= 1)
{
    int maxThreads = 4;
    Thread one = new Thread(() => calculateChunk(0, maxThreads, length, N, output));
    Thread two = new Thread(() => calculateChunk(1, maxThreads, length, N, output));
    Thread three = new Thread(() => calculateChunk(2, maxThreads, length, N, output));
    Thread four = new Thread(() => calculateChunk(3, maxThreads, length, N, output));
    one.Start();
    two.Start();
    three.Start();
    four.Start();
}
public void calculateChunk(int i, int maxThreads, int length, int N, Complex[] output)
{
    for (int j = ((i / maxThreads) * length); j < (((i + 1) / maxThreads) * length); j += N)
    {
        for (int k = 0; k < N / 2; k++)
        {
            int evenIndex = j + k;
            int oddIndex = j + k + (N / 2);

            var even = output[evenIndex];
            var odd = output[oddIndex];

            output[evenIndex] = even + odd * twiddles[k * (length / N)];
            output[oddIndex] = even + odd * twiddles[(k + (N / 2)) * (length / N)];
        }
    }
}
The issue is that in the fourth thread, on the last iteration of the N loop, I get an index-out-of-bounds exception for the output array, where the index is attempting to access the equivalent of its length.
I cannot pinpoint the cause with the debugger, but I believe it has to do with the threads; I ran the code without the threads and it worked as intended.
If any of the code needs changing let me know. Thanks for your help; I have tried to sort it out myself and am fairly certain the problem is occurring in my threading, but I cannot see how.
PS: The intended purpose is to parallelize this segment of code.
The observed behaviour is almost certainly due to the use of a captured loop iteration variable N. I can reproduce your situation with a simple test:
ConcurrentBag<int> numbers = new ConcurrentBag<int>();
for (int i = 0; i < 10000; i++)
{
    Thread t = new Thread(() => numbers.Add(i));
    t.Start();
    //t.Join(); // Uncomment this to get the expected behaviour.
}

// You'd not expect this assert to be true, but most of the time it will be.
Assert.True(numbers.Contains(10000));
Put simply, your for loop is racing to increment N before the value of N can be copied by the delegate that executes the calculateChunk call. As a result, calculateChunk sees almost random values of N going up to (and including) length << 1 - that's what's causing your IndexOutOfRangeException.
The output values you'll get will be rubbish too as you can never rely on the value of N being correct.
If you want to safely rewrite the original code to utilize more cores, move Parallel.ForEach from the inner loop to the outer loop. If the number of outer loop iterations is high, the load balancer will be able to do its job properly (which it can't with your current workGroup count of 4 - that number of elements is simply too low).
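If you do want to keep the manual threads, one minimal fix for the capture problem is to copy N into a local variable inside the loop body and to Join the threads before the next pass starts (each FFT stage depends on the previous one). A sketch, leaving the chunk arithmetic of calculateChunk untouched:

for (int N = 2; N <= length; N <<= 1)
{
    int maxThreads = 4;
    int n = N; // local copy: each thread's closure now captures this pass's value
    var threads = Enumerable.Range(0, maxThreads)
        .Select(i => new Thread(() => calculateChunk(i, maxThreads, length, n, output)))
        .ToList();

    threads.ForEach(t => t.Start());
    threads.ForEach(t => t.Join()); // wait: the next pass reads this pass's results
}

The ToList() matters: it forces the threads to be created while n still holds this iteration's value, and the lambda parameter i is already safely scoped per element.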
I'm trying to translate a bit of C++ into C# and I can't seem to do it without losing a ton of performance, due to the loss of speed in looking up and accessing array elements. I'm using 3D jagged arrays because that was the most intuitive choice at the time, but I'm very open to alternative suggestions. So my question is: is there a way to access some kind of collection in the same way, or a similar way, as array pointers allow? Here's a bit of the C++ I was converting:
void Upsample(float* from, float* to, int n, int stride)
{
    float* p, pCoeffs[4] = { 0.25, 0.75, 0.75, 0.25 };
    p = &pCoeffs[2];
    for (int i = 0; i < n; i++)
    {
        to[i * stride] = 0;
        for (int k = i / 2; k <= i / 2 + 1; k++)
            to[i * stride] += p[i - 2 * k] * from[Mod(k, n / 2) * stride];
    }
}
////
for (iy = 0; iy < n; iy++) for (iz = 0; iz < n; iz++)
{
    Upsample(&noise[i], &temp1[i], n, 1);
}
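For what it's worth, one common way to keep the pointer-style access without unsafe code is a flat 1D array plus an explicit offset; the C++ pointer p = &pCoeffs[2] then becomes a constant index offset. A sketch of that translation (Mod is assumed to be the same positive-modulus helper the C++ version uses):

// Sketch: mimic the C++ pointer arithmetic with index offsets into flat arrays.
// fromOffset/toOffset play the role of the &noise[i] / &temp1[i] pointers.
static void Upsample(float[] from, int fromOffset, float[] to, int toOffset, int n, int stride)
{
    float[] pCoeffs = { 0.25f, 0.75f, 0.75f, 0.25f };
    const int pCenter = 2; // corresponds to p = &pCoeffs[2] in the C++ version
    for (int i = 0; i < n; i++)
    {
        to[toOffset + i * stride] = 0;
        for (int k = i / 2; k <= i / 2 + 1; k++)
            to[toOffset + i * stride] +=
                pCoeffs[pCenter + i - 2 * k] * from[fromOffset + Mod(k, n / 2) * stride];
    }
}

static int Mod(int x, int m) => ((x % m) + m) % m; // positive modulus, as in the C++

Flat arrays also tend to be faster than 3D jagged arrays here, since each access is a single bounds-checked load instead of three.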
I have a question about using Math.Exp() in C#. This code is about kernel density estimation, and I don't have any knowledge of kernel density estimation, so I looked up some wiki pages and papers.
I tried to write it in C#. The problem is that when "distance" gets larger, the result becomes 0. This confuses me, and I cannot find any other way to get the right result.
disExp = Math.Pow(Math.E, -(distance / 2 * Math.Pow(h, 2)));
So, can anyone help me find a solution, or give me some ideas about kernel density estimation in C#? Sorry for my poor English.
Try this
public static double[,] KernelDensityEstimation(double[] data, double sigma, int nsteps)
{
    // Probability density function (PDF) estimation for signal analysis.
    // Works like ksdensity in MATLAB.
    // Performs kernel density estimation (KDE) on one-dimensional data.
    // http://en.wikipedia.org/wiki/Kernel_density_estimation
    //
    // Input:  -data:   input data, one-dimensional
    //         -sigma:  bandwidth (sometimes called "h")
    //         -nsteps: number of abscissa points (default 100)
    // Output: -x: equispaced abscissa points
    //         -y: estimates of p(x)
    //
    // This function is part of the Kernel Methods Toolbox (KMBOX) for MATLAB.
    // http://sourceforge.net/p/kmbox
    // Converted to C# code by ksandric.

    double[,] result = new double[nsteps, 2];
    double[] x = new double[nsteps], y = new double[nsteps];
    double MAX = Double.MinValue, MIN = Double.MaxValue;
    int N = data.Length; // number of data points

    // Find MIN and MAX values in data
    for (int i = 0; i < N; i++)
    {
        if (MAX < data[i])
        {
            MAX = data[i];
        }
        if (MIN > data[i])
        {
            MIN = data[i];
        }
    }

    // Like MATLAB linspace(MIN, MAX, nsteps): nsteps equispaced points
    // (dividing by nsteps - 1 so the last point lands exactly on MAX)
    x[0] = MIN;
    for (int i = 1; i < nsteps; i++)
    {
        x[i] = x[i - 1] + ((MAX - MIN) / (nsteps - 1));
    }

    // Kernel density estimation: sum a Gaussian kernel centred on each data point
    double c = 1.0 / (Math.Sqrt(2 * Math.PI * sigma * sigma));
    for (int i = 0; i < N; i++)
    {
        for (int j = 0; j < nsteps; j++)
        {
            y[j] = y[j] + 1.0 / N * c * Math.Exp(-(data[i] - x[j]) * (data[i] - x[j]) / (2 * sigma * sigma));
        }
    }

    // Combine X and Y into the result. Good for creating plot(x, y)
    for (int i = 0; i < nsteps; i++)
    {
        result[i, 0] = x[i];
        result[i, 1] = y[i];
    }
    return result;
}
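A minimal usage sketch (the sample values here are purely illustrative):

// Hypothetical usage: estimate the density of a small sample and print (x, p(x)) pairs.
double[] sample = { 1.0, 1.1, 1.3, 2.0, 2.1, 4.5 };
double[,] kde = KernelDensityEstimation(sample, sigma: 0.5, nsteps: 100);
for (int i = 0; i < kde.GetLength(0); i++)
    Console.WriteLine("{0:F3}\t{1:F5}", kde[i, 0], kde[i, 1]);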
Lately I've implemented my own neural network (using different guides, but mainly from here), for future use (I intend to use it for an OCR program I'll develop). Currently I'm testing it, and I'm running into this weird problem.
Whenever I give my network a training example, the algorithm changes the weights in a way that leads to the desired output. However, after a few training examples, the weights get messed up, making the network work well for some outputs and wrong for others (even if I enter the input of the training examples exactly as it was).
I would appreciate it if someone could direct me towards the problem, should they see it.
Here are the methods for calculating the error of the neurons and for adjusting the weights:
private static void UpdateOutputLayerDelta(NeuralNetwork Network, List<double> ExpectedOutputs)
{
    for (int i = 0; i < Network.OutputLayer.Neurons.Count; i++)
    {
        double NeuronOutput = Network.OutputLayer.Neurons[i].Output;
        Network.OutputLayer.Neurons[i].ErrorFactor = ExpectedOutputs[i] - NeuronOutput; // calculating the error factor
        Network.OutputLayer.Neurons[i].Delta = NeuronOutput * (1 - NeuronOutput) * Network.OutputLayer.Neurons[i].ErrorFactor; // calculating the neuron's delta
    }
}

//step 3 method
private static void UpdateNetworkDelta(NeuralNetwork Network)
{
    NeuronLayer UpperLayer = Network.OutputLayer;
    for (int i = Network.HiddenLayers.Count - 1; i >= 0; i--)
    {
        foreach (Neuron LowerLayerNeuron in Network.HiddenLayers[i].Neurons)
        {
            for (int j = 0; j < UpperLayer.Neurons.Count; j++)
            {
                Neuron UpperLayerNeuron = UpperLayer.Neurons[j];
                LowerLayerNeuron.ErrorFactor += UpperLayerNeuron.Delta * UpperLayerNeuron.Weights[j + 1]; /*+1 because of bias*/
            }
            LowerLayerNeuron.Delta = LowerLayerNeuron.Output * (1 - LowerLayerNeuron.Output) * LowerLayerNeuron.ErrorFactor;
        }
        UpperLayer = Network.HiddenLayers[i];
    }
}

//step 4 method
private static void AdjustWeights(NeuralNetwork Network, List<double> NetworkInputs)
{
    // Adjusting the weights of the hidden layers
    List<double> LowerLayerOutputs = new List<double>(NetworkInputs);
    for (int i = 0; i < Network.HiddenLayers.Count; i++)
    {
        foreach (Neuron UpperLayerNeuron in Network.HiddenLayers[i].Neurons)
        {
            UpperLayerNeuron.Weights[0] += -LearningRate * UpperLayerNeuron.Delta;
            for (int j = 1; j < UpperLayerNeuron.Weights.Count; j++)
                UpperLayerNeuron.Weights[j] += -LearningRate * UpperLayerNeuron.Delta * LowerLayerOutputs[j - 1]; /*-1 because of bias*/
        }
        LowerLayerOutputs = Network.HiddenLayers[i].GetLayerOutputs();
    }

    // Adjusting the weights of the output layer
    foreach (Neuron OutputNeuron in Network.OutputLayer.Neurons)
    {
        OutputNeuron.Weights[0] += -LearningRate * OutputNeuron.Delta * 1; // updating the bias - TODO: change this if the bias is also changed throughout the program
        for (int j = 1; j < OutputNeuron.Weights.Count; j++)
            OutputNeuron.Weights[j] += -LearningRate * OutputNeuron.Delta * LowerLayerOutputs[j - 1];
    }
}
The learning rate is 0.5, and the neurons' activation function is a sigmoid function.
EDIT: I've noticed I never implemented the function to calculate the overall error, E = 0.5 * Sum((t - y)^2), for each training example. Could that be the problem? And if so, how should I fix it?
The learning rate of 0.5 seems a bit too large; usually values closer to 0.01 or 0.1 are used. It also usually helps convergence if the training patterns are presented in random order. More useful hints can be found in the Neural Network FAQ (comp.ai.neural-nets archive).
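As a sketch of the random-order idea (trainingSet, epochs, and TrainOn are hypothetical names; TrainOn is assumed to wrap the update steps shown in the question for one example):

// Present the training patterns in a random order each epoch.
// OrderBy with a random key is a simple, adequate shuffle for this purpose.
var rng = new Random();
for (int epoch = 0; epoch < epochs; epoch++)
{
    foreach (var example in trainingSet.OrderBy(_ => rng.Next()))
        TrainOn(example.Inputs, example.ExpectedOutputs);
}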
I am having a brain freeze and just can't figure out a solution to this problem.
I created a class called CustomSet that contains a string list. The class holding the CustomSet references stores them in a list.
public class CustomSet : IEnumerable<string>
{
    public string Name { get; set; }

    internal IList<string> elements;

    public CustomSet(string name)
    {
        this.Name = name;
        this.elements = new List<string>();
    }

    // IList so that callers can Add elements and index into the set
    public IList<string> Elements
    {
        get
        {
            return elements;
        }
    }

    public IEnumerator<string> GetEnumerator()
    {
        return elements.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
So what I would like to do is iterate over this list of custom sets to output a 2D string array where the columns correspond to the CustomSets and the number of rows is the product of the CustomSet element counts.
As an example, if there were 3 custom sets in the list (the 1st with 3 elements, the 2nd with 2 and the 3rd with 3), I would want to output 3 columns and 18 rows (3 * 2 * 3). The following code is an attempt at a solution:
CustomSet motion = new CustomSet("Motion");
motion.Elements.Add("low");
motion.Elements.Add("medium");
motion.Elements.Add("high");

CustomSet speed = new CustomSet("Speed");
speed.Elements.Add("slow");
speed.Elements.Add("Fast");

CustomSet mass = new CustomSet("Mass");
mass.Elements.Add("light");
mass.Elements.Add("medium");
mass.Elements.Add("heavy");

List<CustomSet> aSet = new List<CustomSet>();
aSet.Add(motion);
aSet.Add(speed);
aSet.Add(mass);

// problem code
int rows = 1;
for (int i = 0; i < aSet.Count; i++)
{
    rows *= aSet[i].Elements.Count;
}

string[,] array = new String[aSet.Count, rows];
int modulus;
for (int i = 0; i < aSet.Count; i++)
{
    for (int j = 0; j < rows; j++)
    {
        modulus = j % aSet[i].Elements.Count;
        array[i, j] = aSet[i].Elements[modulus];
    }
}

for (int j = 0; j < rows; j++)
{
    for (int i = 0; i < aSet.Count; i++)
    {
        Console.Write(array[i, j] + " / ");
    }
    Console.WriteLine();
}
// end
Console.ReadLine();
However, the code does not output the correct string array (though it is close). What I would like it to output is the following:
low / slow / light /
low / slow / medium /
low / slow / heavy /
low / Fast / light /
low / Fast / medium /
low / Fast / heavy /
medium / slow / light /
medium / slow / medium /
medium / slow / heavy /
medium / Fast / light /
medium / Fast / medium /
medium / Fast / heavy /
high / slow / light /
high / slow / medium /
high / slow / heavy /
high / Fast / light /
high / Fast / medium /
high / Fast / heavy /
Now, the variables in this problem are the number of CustomSets in the list and the number of elements in each CustomSet.
You can get the product in one go:
var crossJoin = from m in motion
                from s in speed
                from ms in mass
                select new { Motion = m, Speed = s, Mass = ms };

foreach (var val in crossJoin)
{
    Console.WriteLine("{0} / {1} / {2} /", val.Motion, val.Speed, val.Mass);
}
Now, since you don't know the number of lists, you need to do a bit more. Eric Lippert covers this in this article, where you can use the CartesianProduct function defined there in the following manner:
var cProduct = SomeContainerClass.CartesianProduct(aSet.Select(m => m.Elements));
var stringsToOutput = cProduct.Select(l => string.Join(" / ", l));
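For reference, the CartesianProduct function from that article looks essentially like this (reproduced here as a sketch; see the linked post for the full discussion):

// Starting from a product containing one empty sequence, each input sequence
// multiplies the accumulator: every partial result is extended with every item.
static IEnumerable<IEnumerable<T>> CartesianProduct<T>(IEnumerable<IEnumerable<T>> sequences)
{
    IEnumerable<IEnumerable<T>> emptyProduct = new[] { Enumerable.Empty<T>() };
    return sequences.Aggregate(
        emptyProduct,
        (accumulator, sequence) =>
            from accseq in accumulator
            from item in sequence
            select accseq.Concat(new[] { item }));
}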
This recursive approach shows the results as desired, with a list of n CustomSet objects:
void OutputSets(List<CustomSet> aSet, int setIndex, string hierarchyString)
{
    string outputString = hierarchyString;
    int nextIndex = setIndex + 1;
    foreach (string element in aSet[setIndex].Elements)
    {
        if (nextIndex < aSet.Count)
        {
            // Not at the last set yet: recurse, extending the prefix with this element
            OutputSets(aSet, nextIndex, hierarchyString + element + " / ");
        }
        else
        {
            // Last set: print the accumulated prefix plus this element
            Console.WriteLine(outputString + element + " / ");
        }
    }
}
Call it with:
OutputSets(aSet, 0, "");