I'm working on a project in C# that uses Principal Component Analysis to apply feature reduction/dimension reduction on a double[,] matrix. The matrix columns are features (words and bigrams) that have been extracted from a set of emails. In the beginning we had around 156 emails, which resulted in approximately 23,000 terms, and everything worked as it was supposed to using the following code:
public static double[,] GetPCAComponents(double[,] sourceMatrix, int dimensions = 20, AnalysisMethod method = AnalysisMethod.Center)
{
    // Create Principal Component Analysis of a given source
    PrincipalComponentAnalysis pca = new PrincipalComponentAnalysis(sourceMatrix, method);

    // Compute the Principal Component Analysis
    pca.Compute();

    // Create a projection of the information
    double[,] pcaComponents = pca.Transform(sourceMatrix, dimensions);

    // Return PCA components
    return pcaComponents;
}
The components we received were classified later on using Linear Discriminant Analysis' Classify method from the Accord.NET framework. Everything was working as it should.
Now that we have increased the size of our dataset (1519 emails and 68375 terms), we at first were getting some OutOfMemoryExceptions. We were able to solve this by adjusting some parts of our code until we were able to reach the part where we calculate the PCA components. Right now this takes about 45 minutes, which is way too long. After checking the Accord.NET documentation on PCA, we decided to try the last example, which uses a covariance matrix, since it says: "Some users would like to analyze huge amounts of data. In this case, computing the SVD directly on the data could result in memory exceptions or excessive computing times". Therefore we changed our code to the following:
public static double[,] GetPCAComponents(double[,] sourceMatrix, int dimensions = 20, AnalysisMethod method = AnalysisMethod.Center)
{
    // Compute the mean vector
    double[] mean = Accord.Statistics.Tools.Mean(sourceMatrix);

    // Compute the covariance matrix
    double[,] covariance = Accord.Statistics.Tools.Covariance(sourceMatrix, mean);

    // Create the analysis using the covariance matrix
    var pca = PrincipalComponentAnalysis.FromCovarianceMatrix(mean, covariance);

    // Compute the Principal Component Analysis
    pca.Compute();

    // Create a projection of the information
    double[,] pcaComponents = pca.Transform(sourceMatrix, dimensions);

    // Return PCA components
    return pcaComponents;
}
This, however, raises a System.OutOfMemoryException. Does anyone know how to solve this problem?
I think parallelizing your solver is the best bet.
Perhaps something like FEAST would help.
http://www.ecs.umass.edu/~polizzi/feast/
Parallel linear algebra for multicore systems
The problem is that the code is using multi-dimensional matrices instead of jagged matrices. The point is that double[,] needs a contiguous block of memory to be allocated, which may be quite hard to find depending on how much space you need. If you use jagged matrices (double[][]), memory allocations are spread out and space is easier to find.
You can avoid this issue by upgrading to the latest version of the framework and using the new API for statistical analysis. Instead of passing your source matrix in the constructor and calling .Compute(), simply call .Learn():
public static double[][] GetPCAComponents(double[][] sourceMatrix, int dimensions = 20, AnalysisMethod method = AnalysisMethod.Center)
{
    // Create Principal Component Analysis of a given source
    PrincipalComponentAnalysis pca = new PrincipalComponentAnalysis(method)
    {
        NumberOfOutputs = dimensions // limit the number of dimensions
    };

    // Compute the Principal Component Analysis
    pca.Learn(sourceMatrix);

    // Create a projection of the information
    double[][] pcaComponents = pca.Transform(sourceMatrix);

    // Return PCA components
    return pcaComponents;
}
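If your existing pipeline still produces a double[,], a plain conversion loop is enough to feed the new jagged-array API. A minimal sketch (recent Accord.Math versions also ship a ToJagged() extension method that should do the same job):

public static double[][] ToJaggedArray(double[,] source)
{
    int rows = source.GetLength(0);
    int cols = source.GetLength(1);
    double[][] result = new double[rows][];
    for (int i = 0; i < rows; i++)
    {
        // Each row is a separate, smaller allocation, so no single
        // contiguous rows x cols block is required.
        result[i] = new double[cols];
        for (int j = 0; j < cols; j++)
            result[i][j] = source[i, j];
    }
    return result;
}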
We've got a PRNG in our code base which we think has a bug in its method to produce numbers that conform to a given normal distribution.
Is there a C# implementation of a normality test which I can leverage in my unit test suite to assert that this module behaves as desired/expected?
My preference would be something that has a signature like:
bool NormalityTest.IsNormal(IEnumerable<int> samples)
Math.Net has distribution functions and random number sampling. It is probably the most widely used math library, very solid.
http://numerics.mathdotnet.com/
http://mathnetnumerics.codeplex.com/wikipage?title=Probability%20Distributions&referringTitle=Documentation
You can try this: http://accord-framework.net/docs/html/T_Accord_Statistics_Testing_ShapiroWilkTest.htm
Scroll down to the example:
// Let's say we would like to determine whether a set
// of observations come from a normal distribution:
double[] samples =
{
    0.11, 7.87, 4.61, 10.14, 7.95, 3.14, 0.46, 4.43,
    0.21, 4.75, 0.71, 1.52, 3.24, 0.93, 0.42, 4.97,
    9.53, 4.55, 0.47, 6.66
};
// For this, we can use the Shapiro-Wilk test. This test tests the null hypothesis
// that samples come from a Normal distribution, vs. the alternative hypothesis that
// the samples do not come from such distribution. In other words, should this test
// come out significant, it means our samples do not come from a Normal distribution.
// Create a new Shapiro-Wilk test:
var sw = new ShapiroWilkTest(samples);
double W = sw.Statistic; // should be 0.90050
double p = sw.PValue; // should be 0.04209
bool significant = sw.Significant; // should be true
// The test is significant, therefore we should reject the null
// hypothesis that the samples come from a Normal distribution.
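If you want the exact bool NormalityTest.IsNormal(IEnumerable<int> samples) shape asked for above, a thin wrapper around the test is enough. A minimal sketch, assuming the default 0.05 significance level is what you want to assert against:

using System.Collections.Generic;
using System.Linq;
using Accord.Statistics.Testing;

public static class NormalityTest
{
    // Returns true when the Shapiro-Wilk test does NOT reject the null
    // hypothesis that the samples come from a Normal distribution.
    public static bool IsNormal(IEnumerable<int> samples)
    {
        double[] data = samples.Select(x => (double)x).ToArray();
        var sw = new ShapiroWilkTest(data);
        return !sw.Significant;
    }
}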
I do not know of any C# implementation of a test of data for normality. I had the same issue and built my own routine. This web page helped me a lot. It is written for executing the test in Excel, but it gives you all the necessary coefficients (Royston) and the logic.
I have huge transient arrays created rapidly. Some are kept, some are GC'd. This fragments the heap and the app consumes approx. 2.5x more memory than it would truly need, resulting in an OutOfMemoryException.
As a solution, I would prefer to have one gigantic array (PointF[]) and do the allocation and management of segments by myself. But I wonder how I could make two (or more) arrays share the same memory space.
PointF[] giganticList = new PointF[100];
PointF[] segment = ???;
// I want the segment length to be 20 and starting e.g at position 50
// within the gigantic list
I am thinking of a trick like the winning answer of this SO question.
Would that be possible? The problem is that the length and the number of the segment arrays are known only in runtime.
Assuming you are sure that your OutOfMemoryException could be avoided, and your approach of having it all in memory isn't the actual issue (the GC is pretty good at stopping this happening if memory is available) ...
Here is your first problem. I'm not sure the CLR supports any single object larger than 2 GB.
Crucial Edit - gcAllowVeryLargeObjects changes this on 64-bit systems - try this before rolling your own solution.
Secondly, you are talking about "some are kept, some are GC'd", i.e. you want to be able to reallocate elements of your array once you are done with a "child array".
Thirdly, I'm assuming that the PointF[] giganticList = new PointF[100]; in your question is meant to be more like PointF[] giganticList = new PointF[1000000];?
Also consider using MemoryFailPoint as this allows you to "demand" memory and check for exceptions instead of crashing with OutOfMemoryException.
EDIT: Perhaps most importantly, you are now entering a land of trade-offs. If you do this you could start losing the advantages of things like the JIT optimising loops by eliding array bounds checks (for (int i = 0; i < myArray.Length; i++) gets optimised; int length = 5; for (int i = 0; i < length; i++) doesn't). If you have computation-heavy code, this could hurt you. You are also going to have to work far harder to process different child arrays in parallel with each other. Creating copies of the child arrays, or sections of them, or even items inside them, is still going to allocate more memory which will be GC'd.
This is possible by wrapping the array, and tracking which sections are used for which child arrays. You are essentially talking about allocating a huge chunk of memory, and then reusing parts of it without putting the onus on the GC. You can take advantage of ArraySegment<T>, but that comes with its own potential issues like exposing the original array to all callers.
This is not going to be simple, but it is possible. Likely as not each time you remove a child array you will want to defragment your master array by shifting other child arrays to close the gaps (or do that when you have run out of contiguous segments).
A simple example would look something like the (untested, don't blame me if your computer leaves home and your cat blows up) pseudocode below. There are two other approaches, I mention those at the end.
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

public class ArrayCollection
{
    List<int> startIndexes = new List<int>();
    List<int> lengths = new List<int>();
    const int OneBeeellion = 100; // identifiers can't start with a digit
    PointF[] giganticList = new PointF[OneBeeellion];

    public ArraySegment<PointF> this[int childIndex]
    {
        get
        {
            // Care with this method, ArraySegment exposes the original array,
            // which callers could then do bad things to
            return new ArraySegment<PointF>(giganticList, startIndexes[childIndex], lengths[childIndex]);
        }
    }

    // returns the index of the child array
    public int AddChild(int length)
    {
        // TODO: needs to take account of lists with no entries yet
        int startIndex = startIndexes.Last() + lengths.Last();

        // TODO: check that startIndex + length is not more than giganticList.Length
        // If it is then
        //    find the smallest unused block which is larger than the length requested
        //    or defrag our unused array sections
        //    otherwise throw out of memory

        startIndexes.Add(startIndex);  // will need inserts for defrag operations
        lengths.Add(length);           // will need inserts for defrag operations
        return startIndexes.Count - 1; // inserts will need to return inserted index
    }

    public ArraySegment<PointF> GetChildAsSegment(int childIndex)
    {
        // Care with this method, ArraySegment exposes the original array,
        // which callers could then do bad things to
        return new ArraySegment<PointF>(giganticList, startIndexes[childIndex], lengths[childIndex]);
    }

    public void SetChildValue(int childIndex, int elementIndex, PointF value)
    {
        // TODO: needs to take account of lists with no entries yet, or invalid childIndex
        // TODO: check and PREVENT buffer overflow (see warning) here and in other methods
        // e.g.
        if (elementIndex >= lengths[childIndex]) throw new YouAreAnEvilCallerException();

        int falseZeroIndex = startIndexes[childIndex];
        giganticList[falseZeroIndex + elementIndex] = value;
    }

    public PointF GetChildValue(int childIndex, int elementIndex)
    {
        // TODO: needs to take account of lists with no entries yet, bad child index, element index
        int falseZeroIndex = startIndexes[childIndex];
        return giganticList[falseZeroIndex + elementIndex];
    }

    public void RemoveChildArray(int childIndex)
    {
        startIndexes.RemoveAt(childIndex);
        lengths.RemoveAt(childIndex);
        // TODO: possibly record the unused segment in another pair of start, length lists
        // to allow for defragging in AddChild
    }
}
Warning: The above code effectively introduces buffer overflow vulnerabilities if, for instance, you don't check the requested elementIndex against the length of the child array in methods like SetChildValue. You must understand this and prevent it before trying to do this in production, especially if combining these approaches with use of unsafe.
Now, this could be extended to provide pseudo-index public PointF this[int index] methods for child arrays, enumerators for the child arrays, etc., but as I say, this is getting complex and you need to decide if it really will solve your problem. Most of your time will be spent on the reuse (first), defrag (second), expand (third), throw OutOfMemory (last) logic.
This approach also has the advantage that you could allocate many 2GB subarrays and use them as a single array, if my comment about the 2GB object limit is correct.
This assumes you don't want to go down the unsafe route and use pointers, but the effect is the same, you would just create a wrapper class to manage child arrays in a fixed block of memory.
Another approach is to use the hashset/dictionary approach. Allocate your entire (massive 2GB array) and break it into chunks (say 100 array elements). A child array will then have multiple chunks allocated to it, and some wasted space in its final chunk. This will have the impact of some wasted space overall (depending on your average "child length vs. chunk length" predictions), but the advantage that you could increase and decrease the size of child arrays, and remove and insert child arrays with less impact on your fragmentation.
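A rough sketch of that chunked idea, with made-up names and no defragmentation logic, just to show the bookkeeping involved:

using System;
using System.Collections.Generic;
using System.Drawing;

public class ChunkedPool
{
    private const int ChunkSize = 100;                 // elements per chunk
    private readonly PointF[] storage;                 // the one gigantic array
    private readonly Stack<int> freeChunks = new Stack<int>();
    private readonly Dictionary<int, List<int>> childChunks = new Dictionary<int, List<int>>();
    private int nextChildId;

    public ChunkedPool(int totalChunks)
    {
        storage = new PointF[totalChunks * ChunkSize];
        for (int i = totalChunks - 1; i >= 0; i--)
            freeChunks.Push(i);
    }

    // Reserve enough whole chunks for 'length' elements; the unused tail of the
    // last chunk is the wasted space mentioned above.
    public int AllocateChild(int length)
    {
        int chunksNeeded = (length + ChunkSize - 1) / ChunkSize;
        if (freeChunks.Count < chunksNeeded)
            throw new OutOfMemoryException("Pool exhausted.");

        var chunks = new List<int>();
        for (int i = 0; i < chunksNeeded; i++)
            chunks.Add(freeChunks.Pop());
        childChunks[nextChildId] = chunks;
        return nextChildId++;
    }

    // TODO: bounds-check elementIndex against the child's logical length
    // (see the buffer overflow warning above).
    public PointF GetValue(int childId, int elementIndex)
    {
        int chunk = childChunks[childId][elementIndex / ChunkSize];
        return storage[chunk * ChunkSize + elementIndex % ChunkSize];
    }

    public void SetValue(int childId, int elementIndex, PointF value)
    {
        int chunk = childChunks[childId][elementIndex / ChunkSize];
        storage[chunk * ChunkSize + elementIndex % ChunkSize] = value;
    }

    // Returning the chunks to the free list is all removal costs; no shifting
    // of the master array is needed.
    public void ReleaseChild(int childId)
    {
        foreach (int chunk in childChunks[childId])
            freeChunks.Push(chunk);
        childChunks.Remove(childId);
    }
}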
Noteworthy References:
Large arrays in 64 bit .NET 4: gcAllowVeryLargeObjects
MemoryFailPoint - allows you to "demand" memory and check for exceptions instead of crashing with OutOfMemoryException after the fact
Large Arrays, and LOH Fragmentation. What is the accepted convention?
3GB process limit on 32 bit, see: 3_GB_barrier, Server Fault /3GB considerations and AWE/PAE
buffer overflow vulnerability, and why you can get this in C#
Other examples of accessing arrays as a different kind of array or structure. The implementations of these might help you develop your own solution
BitArray Class
BitVector32 Structure
NGenerics - clever uses of array like concepts in some of the members of this library, particularly the general structures such as ObjectMatrix and Bag
C# array of objects, very large, looking for a better way
Array optimisation
Eric Gu - Efficiency of iteration over arrays? - note the age of this, but the approach of how to look for JIT optimisation is still relevant even with .NET 4.0 (see Array Bounds Check Elimination in the CLR? for example)
Dave Detlefs - Array Bounds Check Elimination in the CLR
Warning - pdf: Implicit Array Bounds Checking on 64-bit Architectures
LinkedList - allows you to reference multiple disparate array buckets in a sequence (tying together the chunks in the chunked bucket approach)
Parallel arrays and use of unsafe
Parallel Matrix Multiplication With the Task Parallel Library (TPL), particularly UnsafeSingle - a square or jagged array represented by a single array is the same class of problem you are trying to solve.
buffer overflow vulnerability, and why you can get this in C# (yes I have mentioned this three times now, it is important)
Your best bet here is probably to use multiple ArraySegment<PointF> all on the same PointF[] instance, but at different offsets, and have your calling code take note of the relative .Offset and .Count. Note that you would have to write your own code to allocate the next block, and look for gaps, etc - essentially your own mini-allocator.
You can't treat the segments just as a PointF[] directly.
So:
PointF[] giganticList = new PointF[100];
// I want the segment length to be 20 and starting e.g at position 50
// within the gigantic list
var segment = new ArraySegment<PointF>(giganticList, 50, 20);
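Reading and writing then goes through the segment's Array, Offset and Count members (on .NET 4 there is no indexer on ArraySegment<T> itself), for example:

// Each access goes back to the original array, offset by segment.Offset;
// segment.Count is the logical length of this child array.
for (int i = 0; i < segment.Count; i++)
{
    PointF p = segment.Array[segment.Offset + i];
    segment.Array[segment.Offset + i] = new PointF(p.X + 1f, p.Y);
}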
As a side note: another approach might be to use a pointer to the data - either from an unmanaged allocation, or from a managed array that has been pinned (note: you should try to avoid pinning), but : while a PointF* can convey its own offset information, it cannot convey length - so you'd need to always pass both a PointF* and Length. By the time you've done that, you might as well have just used ArraySegment<T>, which has the side-benefit of not needing any unsafe code. Of course, depending on the scenario, treating the huge array as unmanaged memory may (in some scenarios) still be tempting.
Program Purpose: Integration. I am implementing an adaptive quadrature (aka numerical integration) algorithm for high dimensions (up to 100). The idea is to randomly break the volume up into smaller sections by evaluating points using a sampling density proportional to an estimate of the error at that point. Early on I "burn-in" a uniform sample, then randomly choose points according to a Gaussian distribution over the estimated error. In a manner similar to simulated annealing, I "lower the temperature" and reduce the standard deviation of my Gaussian as time goes on, so that low-error points initially have a fair chance of being chosen, but later on are chosen with steadily decreasing probability. This enables the program to stumble upon spikes that might be missed due to imperfections in the error function. (My algorithm is similar in spirit to Markov-Chain Monte-Carlo integration.)
Function Characteristics. The function to be integrated is estimated insurance policy loss for multiple buildings due to a natural disaster. Policy functions are not smooth: there are deductibles, maximums, layers (e.g. zero payout up to 1 million dollars loss, 100% payout from 1-2 million dollars, then zero payout above 2 million dollars) and other odd policy terms. This introduces non-linear behavior and functions that have no derivative in numerous planes. On top of the policy function is the damage function, which varies by building type and strength of hurricane and is definitely not bell-shaped.
Problem Context: Error Function. The difficulty is choosing a good error function. For each point I record measures that seem useful for this: the magnitude of the function, how much it changed as a result of a previous measurement (a proxy for the first derivative), the volume of the region the point occupies (larger volumes can hide error better), and a geometric factor related to the shape of the region. My error function will be a linear combination of these measures, where each measure is assigned a different weight. (If I get poor results, I will contemplate non-linear functions.) To aid me in this effort, I decided to perform an optimization over a wide range of possible values for each weight, hence the Microsoft Solver Foundation.
What to Optimize: Error Rank. My measures are normalized, from zero to one. These error values are progressively revised as the integration proceeds to reflect recent averages for function values, changes, etc. As a result, I am not trying to make a function that yields actual error values, but instead yields a number that sorts the same as the true error, i.e. if all sampled points are sorted by this estimated error value, they should receive a rank similar to the rank they would receive if sorted by the true error.
Not all points are equal. I care very much if the point region with #1 true error is ranked #1000 (or vice versa), but care very little if the #500 point is ranked #1000. My measure of success is to MINIMIZE the sum of the following over many regions at a point partway into the algorithm's execution:
ABS(Log2(trueErrorRank) - Log2(estimatedErrorRank))
For Log2 I am using a function that returns the largest power of two less than or equal to the number. From this definition come useful results. Swapping #1 and #2 costs a point, but swapping #2 and #3 costs nothing. This has the effect of stratifying points into power-of-two ranges. Points that are swapped within a range do not add to the function.
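Something along these lines matches that definition (a sketch, assuming n is a 1-based rank):

// Returns the largest power of two that is <= n, e.g. ranks 4..7 all map to 4.
private static int FloorPowerOfTwo(int n)
{
    int p = 1;
    while (p * 2 <= n)
        p *= 2;
    return p;
}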
How I Evaluate. I have constructed a class called Rank that does this:
1. Ranks all regions by true error once.
2. For each separate set of parameterized weights, it computes the trial (estimated) error for that region.
3. Sorts the regions by that trial error.
4. Computes the trial rank for each region.
5. Adds up the absolute difference of logs of the two ranks and calls this the value of the parameterization, hence the value to be minimized.
C# Code. Having done all that, I just need a way to set up Microsoft Solver Foundation to find me the best parameters. The syntax has me stumped. Here is my C# code that I have so far. In it you will see comments for three problems I have identified. Maybe you can spot even more! Any ideas how to make this work?
public void Optimize()
{
    // Get the parameters from the GUI and figure out the low and high values for each weight.
    ParseParameters();

    // Compute the true rank for each region according to true error.
    var myRanker = new Rank(ErrorData, false);

    // Obtain Microsoft Solver Foundation's core solver object.
    var solver = SolverContext.GetContext();
    var model = solver.CreateModel();

    // Create a delegate that can extract the current value of each solver parameter
    // and stuff it into a double array so we can later use it to call LinearTrial.
    Func<Model, double[]> marshalWeights = (Model m) =>
    {
        var i = 0;
        var weights = new double[myRanker.ParameterCount];
        foreach (var d in m.Decisions)
        {
            weights[i] = d.ToDouble();
            i++;
        }
        return weights;
    };

    // Make a solver decision for each GUI-defined parameter.
    // Parameters is a Dictionary whose Key is the parameter name, and whose
    // value is a Tuple of two doubles, the low and high values for the range.
    // All are Real numbers constrained to fall between a defined low and high value.
    foreach (var pair in Parameters)
    {
        // PROBLEM 1! Should I be using Decisions or Parameters here?
        var decision = new Decision(Domain.RealRange(ToRational(pair.Value.Item1), ToRational(pair.Value.Item2)), pair.Key);
        model.AddDecision(decision);
    }

    // PROBLEM 2! This calls myRanker.LinearTrial immediately,
    // before the Decisions have values. Also, it does not return a Term.
    // I want to pass it in a lambda to be evaluated by the solver for each attempted set
    // of decision values.
    model.AddGoal("Goal", GoalKind.Minimize,
        myRanker.LinearTrial(marshalWeights(model), false)
    );

    // PROBLEM 3! Should I use a directive, like SimplexDirective? What type of solver is best?
    var solution = solver.Solve();

    var report = solution.GetReport();
    foreach (var d in model.Decisions)
    {
        Debug.WriteLine("Decision " + d.Name + ": " + d.ToDouble());
    }
    Debug.WriteLine(report);

    // Enable/disable buttons.
    UpdateButtons();
}
UPDATE: I decided to look for another library as a fallback, and found DotNumerics (http://dotnumerics.com/). Their Nelder-Mead Simplex solver was easy to call:
Simplex simplex = new Simplex()
{
    MaxFunEvaluations = 20000,
    Tolerance = 0.001
};

int numVariables = Parameters.Count();
OptBoundVariable[] variables = new OptBoundVariable[numVariables];

// Constrained minimization on the intervals specified by the user, initial guess = 1
foreach (var x in Parameters.Select((parameter, index) => new { parameter, index }))
{
    variables[x.index] = new OptBoundVariable(x.parameter.Key, 1, x.parameter.Value.Item1, x.parameter.Value.Item2);
}

double[] minimum = simplex.ComputeMin(ObjectiveFunction, variables);

Debug.WriteLine("Simplex Method. Constrained Minimization.");
for (int i = 0; i < minimum.Length; i++)
    Debug.WriteLine(Parameters.ElementAt(i).Key + " = " + minimum[i].ToString());
All I needed was to implement ObjectiveFunction as a method taking a double array:
private double ObjectiveFunction(double[] weights)
{
    return Ranker.LinearTrial(weights, false);
}
I have not tried it against real data, but I created a simulation in Excel to set up test data and score it. The results coming back from their algorithm were not perfect, but gave a very good solution.
Here's my TL;DR summary: He doesn't know how to minimize the return value of LinearTrial, which takes an array of doubles. Each value in this array has its own min/max value, and he's modeling that using Decisions.
If that's correct, it seems you could just do the following:
double[] minimums = Parameters.Select(p => p.Value.Item1).ToArray();
double[] maximums = Parameters.Select(p => p.Value.Item2).ToArray();

// Some initial values, here it's a quick and dirty average
double[] initials = Parameters.Select(p => (p.Value.Item1 + p.Value.Item2) / 2.0).ToArray();

var solution = NelderMeadSolver.Solve(
    x => myRanker.LinearTrial(x, false), initials, minimums, maximums);

// Make sure you check solution.Result to ensure that it found a solution.
// For this, I'll assume it did.

// Value 0 is the minimized value of LinearTrial
int i = 1;
foreach (var param in Parameters)
{
    Console.WriteLine("{0}: {1}", param.Key, solution.GetValue(i));
    i++;
}
The NelderMeadSolver is new in MSF 3.0. The Solve static method "finds the minimum value of the specified function" according to the documentation in the MSF assembly (despite the MSDN documentation being blank and showing the wrong function signature).
Disclaimer: I'm no MSF expert, but the above worked for me and my test goal function (sum the weights).
I have a legacy map viewer application using WinForms. It is sloooooow. (The speed used to be acceptable, but Google Maps and Google Earth came along and users got spoiled. Now I am permitted to make it faster :)
After doing all the obvious speed improvements (caching, parallel execution, not drawing what does not need to be drawn, etc), my profiler shows me that the real choking point is the coordinate transformations when converting points from map-space to screen-space.
Normally a conversion code looks like this:
public Point MapToScreen(PointF input)
{
    // Note that North is negative!
    var result = new Point(
        (int)((input.X - this.currentView.X) * this.Scale),
        (int)((input.Y - this.currentView.Y) * this.Scale));
    return result;
}
The real implementation is trickier. Latitudes/longitudes are represented as integers. To avoid losing precision, they are multiplied up by 2^20 (~1 million). This is how a coordinate is represented.
public struct Position
{
    public const int PrecisionCompensationPower = 20;
    public const int PrecisionCompensationScale = 1048576; // 2^20

    public readonly int LatitudeInt;  // North is negative!
    public readonly int LongitudeInt;
}
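For illustration (the conversion code itself is not shown above), a coordinate in degrees ends up in this representation roughly like so:

// Degrees scaled by 2^20 and truncated to an int; the values here are made up.
double latitudeDegrees = -47.4979;   // North is negative in this scheme
double longitudeDegrees = 19.0402;
int latitudeInt = (int)(latitudeDegrees * Position.PrecisionCompensationScale);
int longitudeInt = (int)(longitudeDegrees * Position.PrecisionCompensationScale);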
It is important that the possible scale factors are also explicitly bound to power of 2. This allows us to replace the multiplication with a bitshift. So the real algorithm looks like this:
public Point MapToScreen(Position input)
{
    Point result = new Point();
    result.X = (input.LongitudeInt - this.UpperLeftPosition.LongitudeInt) >>
               (Position.PrecisionCompensationPower - this.ZoomLevel);
    result.Y = (input.LatitudeInt - this.UpperLeftPosition.LatitudeInt) >>
               (Position.PrecisionCompensationPower - this.ZoomLevel);
    return result;
}
(UpperLeftPosition represents the upper-left corner of the screen in map space.)
I am thinking now of offloading this calculation to the GPU. Can anyone show me an example how to do that?
We use .NET 4.0, but the code should preferably run on Windows XP, too. Furthermore, we cannot use libraries under the GPL.
I suggest you look at using OpenCL and Cloo to do this - take a look at the vector add example and then change it to map the values by using two input ComputeBuffers (one for each of LatitudeInt and LongitudeInt in each point) and two output ComputeBuffers. I suspect the OpenCL code would look something like this:
__kernel void CoordTrans(__global int *lat,
                         __global int *lon,
                         const int ulpLat,
                         const int ulpLon,
                         const int zl,
                         __global int *outx,
                         __global int *outy)
{
    int i = get_global_id(0);
    const int pcp = 20;
    outx[i] = (lon[i] - ulpLon) >> (pcp - zl);
    outy[i] = (lat[i] - ulpLat) >> (pcp - zl);
}
but you would do more than one coord transform per core. I need to rush off; I recommend you read up on OpenCL before doing this.
Also, if the number of coords is reasonable (<100,000/1,000,000) the non-gpu based solution will likely be faster.
Now, one year later, the problem arose again, and we found a very banal answer. I feel a bit stupid for not realizing it earlier. We draw the geographic elements to a bitmap via ordinary WinForms GDI. GDI is hardware accelerated. All we have to do is NOT do the transformation ourselves, but set the transform parameters of the System.Drawing.Graphics object:
Graphics.TranslateTransform(...) and Graphics.ScaleTransform(...)
We do not even need the trick with the bit shifting.
:)
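For anyone looking for the concrete calls, a minimal sketch (assuming scale and viewX/viewY mean the same thing as Scale and currentView/UpperLeftPosition in the code above):

// Let GDI+ apply the map-to-screen transform instead of MapToScreen.
// With the default MatrixOrder.Prepend, the call issued last is applied
// to the points first: translate into view space, then scale.
void DrawMap(Graphics g, float viewX, float viewY, float scale)
{
    g.ScaleTransform(scale, scale);
    g.TranslateTransform(-viewX, -viewY);

    // From here on, drawing calls can use raw map coordinates:
    g.DrawLine(Pens.Black, new PointF(10f, 10f), new PointF(200f, 150f));

    g.ResetTransform();
}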
I'm coming from a CUDA background, and can only speak for NVIDIA GPUs, but here goes.
The problem with doing this on a GPU is the ratio of computation to data transfer.
You have on the order of 1 operation to perform per element. You'd really want to do more than this per element to get a real speed improvement. The bandwidth between global memory and the threads on a GPU is around 100 GB/s. So, if you have to load one 4-byte integer to do one FLOP, your theoretical maximum speed is 100/4 = 25 GFLOPS. This is far from the hundreds of GFLOPS advertised.
Note this is the theoretical maximum; the real result might be worse. And this is even worse if you're loading more than one element. In your case, it looks like 2, so you might get a maximum of 12.5 GFLOPS from it. In practice, it will almost certainly be lower.
If this sounds ok to you though, then go for it!
XNA can be used to do all the transformations you require and gives very good performance. It can also be displayed inside a winforms application: http://create.msdn.com/en-US/education/catalog/sample/winforms_series_1
I spend much of my time programming in R or MATLAB. These languages are typically used for manipulating arrays and matrices, and consequently, they have vectorised operators for addition, equality, etc.
For example, in MATLAB, adding two arrays
[1.2 3.4 5.6] + [9.87 6.54 3.21]
returns an array of the same size
ans =
11.07 9.94 8.81
Switching over to C#, we need a loop, and it feels like a lot of code.
double[] a = { 1.2, 3.4, 5.6 };
double[] b = { 9.87, 6.54, 3.21 };
double[] sum = new double[a.Length];
for (int i = 0; i < a.Length; ++i)
{
    sum[i] = a[i] + b[i];
}
How should I implement vectorised operators using C#? These should preferably work for all numeric array types (and bool[]). Working for multidimensional arrays is a bonus.
The first idea I had was to overload the operators for System.Double[], etc. directly. This has a number of problems, though. Firstly, it could cause confusion and maintainability issues if built-in classes do not behave as expected. Secondly, I'm not sure if it is even possible to change the behaviour of these built-in classes.
So my next idea was to derive a class from each numerical type and overload the operators there. This creates the hassle of converting from double[] to MyDoubleArray and back, which reduces the benefit of me doing less typing.
Also, I don't really want to have to repeat a load of almost identical functionality for every numeric type. This led to my next idea of a generic operator class. In fact, someone else has also had this idea: there's a generic operator class in Jon Skeet's MiscUtil library.
This gives you a method-like prefix syntax for operations, e.g.
double sum = Operator<double>.Add(3.5, -2.44); // 1.06
The trouble is, since the array types don't support addition, you can't just do something like
double[] sum = Operator<double[]>.Add(a, b); // Throws InvalidOperationException
I've run out of ideas. Can you think of anything that will work?
Create a Vector class (actually I'd make it a struct) and overload the arithmetic operators for that class... This has probably been done already; if you do a Google search there are numerous hits... Here's one that looks promising: Vector class...
To handle vectors of arbitrary dimension, I'd:
- make the internal array that holds the individual values for each of the vector's dimensions an array of arbitrary size,
- make the Vector constructor take the dimension as a constructor parameter,
- in the arithmetic operator overloads, add a validation that the two vectors being added or subtracted have the same dimension.
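A minimal sketch of that idea (double storage, just the + operator, with the dimension check in the operator):

using System;

public struct Vector
{
    private readonly double[] values;

    public Vector(params double[] values)
    {
        this.values = (double[])values.Clone();
    }

    public int Dimension
    {
        get { return values.Length; }
    }

    public double this[int i]
    {
        get { return values[i]; }
    }

    public static Vector operator +(Vector a, Vector b)
    {
        if (a.Dimension != b.Dimension)
            throw new ArgumentException("Vectors must have the same dimension.");

        var sum = new double[a.Dimension];
        for (int i = 0; i < sum.Length; i++)
            sum[i] = a.values[i] + b.values[i];
        return new Vector(sum);
    }
}

Usage then reads much like the MATLAB version: var c = new Vector(1.2, 3.4, 5.6) + new Vector(9.87, 6.54, 3.21);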
You should probably create a Vector class that internally wraps an array and overloads the arithmetic operators. There's a decent matrix/vector code library here.
But if you really need to operate on naked arrays for some reason, you can use LINQ:
var a1 = new double[] { 0, 1, 2, 3 };
var a2 = new double[] { 10, 20, 30, 40 };
var sum = a1.Zip( a2, (x,y) => Operator<double>.Add( x, y ) ).ToArray();
Take a look at CSML. It's a fairly complete matrix library for c#. I've used it for a few things and it works well.
The XNA Framework has the classes you may be able to use. You can use it in your application like any other part of .NET. Just grab the XNA redistributable and code away.
BTW, you don't need to do anything special (like getting the game studio or joining the creator's club) to use it in your application.