I'm looking for a simple function in C# to interpolate my 3D data.
I already have a list of roughly 100-150 data sets, each consisting of three double values:
-25.000000 -0.770568 2.444945
-20.000000 -0.726583 2.467809
-15.000000 -0.723274 2.484167
-10.000000 -0.723114 2.506445
and so on...
The chart created from these values usually looks like this; I'm not sure whether this counts as scattered data or still as gridded data ...
In the end I want to hand over two double values and get the third from the interpolation function. It shouldn't flatten the surface; it should still pass through all the given data points.
Since I haven't been given the time to look into all possible algorithms, and I lack the mathematical background, I'm a bit overwhelmed by all the possibilities thrown at me: kriging, Delaunay triangulation, NURBS, and many more ...
In addition, most solutions I found on the net were either for a different language, outdated, or commercial (e.g. ILNumerics, and I'm still not sure whether they have a solution).
In MATLAB there is a griddata function that does exactly this (and is based on a kriging algorithm, as far as I know), but in my case C# is mandatory.
Thank you for your help; criticism and suggestions are welcome.
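For illustration, this is the kind of function signature I'm after. Below is a naive bilinear sketch that would only apply if the data turns out to lie on a regular grid; all names are placeholders, not a finished solution:

    using System;

    static class GridInterp
    {
        // z[i, j] is the known value at (xs[i], ys[j]); xs and ys are sorted ascending.
        // Bilinear interpolation passes through all given grid points,
        // but it assumes gridded (not scattered) data.
        public static double Bilinear(double[] xs, double[] ys, double[,] z, double x, double y)
        {
            int i = Array.BinarySearch(xs, x); if (i < 0) i = ~i - 1;
            int j = Array.BinarySearch(ys, y); if (j < 0) j = ~j - 1;
            i = Math.Max(0, Math.Min(i, xs.Length - 2));
            j = Math.Max(0, Math.Min(j, ys.Length - 2));

            double tx = (x - xs[i]) / (xs[i + 1] - xs[i]);
            double ty = (y - ys[j]) / (ys[j + 1] - ys[j]);

            // Interpolate along x on the two bracketing rows, then along y.
            double z0 = z[i, j] * (1 - tx) + z[i + 1, j] * tx;
            double z1 = z[i, j + 1] * (1 - tx) + z[i + 1, j + 1] * tx;
            return z0 * (1 - ty) + z1 * ty;
        }
    }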
Related
I've been messing around with the AForge time series genetic algorithm sample and I've got my own version working; at the moment it's just 'predicting' Fibonacci numbers.
The problem is that when I ask it to predict new values beyond the array I've given it (which contains the first 21 numbers of the sequence, using a window size of 5), it won't; it throws an exception that says "Data size should be enough for window and prediction".
As far as I can tell I'm supposed to decipher the bizarre formula contained in "population.BestChromosome" and use that to extrapolate future values, is that right? Is there an easier way? Am I overlooking something massively obvious?
I'd ask on the AForge forum, but the developer no longer supports it.
"As far as I can tell I'm supposed to decipher the bizarre formula contained in "population.BestChromosome" and use that to extrapolate future values, is that right?"
What you call a "bizarre formula" is called a model in data analysis. You learn such a model from past data, and you can feed it new data to get a predicted outcome. Whether that new outcome makes sense or is just garbage depends on how general your model is. Many techniques can learn models that explain the observed data very well but that are not generalizable, and these will return useless results when you feed new data into them. You need to find a model that explains both the given data and potentially unobserved data, which is a non-trivial process. Usually people estimate the generalization error of a model by splitting the known data into two partitions: one on which the model is learned and another on which the learned model is tested. You then want to select the model that is accurate on both partitions. You can also check out the answer I gave on another question that also treats the topic of machine learning: https://stackoverflow.com/a/3764893/189767
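A minimal sketch of that holdout split (nothing here is AForge API; it's just the general idea):

    using System;
    using System.Linq;

    // Shuffle the known samples and split them into a training and a test partition.
    // Learn the model on 'train', then estimate generalization error on 'test'.
    static (T[] train, T[] test) HoldoutSplit<T>(T[] samples, double trainFraction, Random rng)
    {
        var shuffled = samples.OrderBy(_ => rng.Next()).ToArray();
        int n = (int)(shuffled.Length * trainFraction);
        return (shuffled.Take(n).ToArray(), shuffled.Skip(n).ToArray());
    }

You would then prefer the model whose error is low on both partitions, not just on the training one.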
I don't think you're "overlooking something massively obvious", but rather you're faced with a problem that is not trivial to solve.
By the way, you can also use genetic programming (GP) in HeuristicLab. The model of GP is a mathematical formula, and in HeuristicLab you can export that model to e.g. MATLAB.
Regarding Fibonacci: the closed formula for Fibonacci numbers is F(n) = (phi^n - psi^n) / sqrt(5), where phi = (1 + sqrt(5)) / 2 (the golden ratio) and psi = (1 - sqrt(5)) / 2. If you want to find that with GP, you need one variable (n), three constants, and the power function. However, it's very likely that you'll find a vastly different formula that is similar in output; a problem in machine learning is that very different models can produce the same output. The recursive form requires that you include the past two values in the data set, which is similar to learning a model for a time series regression problem.
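For illustration, the closed form in C# (double precision keeps this exact only for small n, roughly up to 70):

    using System;

    static long Fibonacci(int n)
    {
        double sqrt5 = Math.Sqrt(5.0);
        double phi = (1.0 + sqrt5) / 2.0;   // golden ratio
        double psi = (1.0 - sqrt5) / 2.0;
        // Binet's closed form: F(n) = (phi^n - psi^n) / sqrt(5)
        return (long)Math.Round((Math.Pow(phi, n) - Math.Pow(psi, n)) / sqrt5);
    }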
I am trying to use SVM for News article classification.
I created a table that contains the features (unique words found in the documents) as rows.
I created weight vectors mapped to these features, i.e. if the article contains a word that is part of the feature table, that position is marked as 1, otherwise 0.
Example of a generated training sample:
1 1:1 2:1 3:1 4:1 5:1 6:1 7:1 8:1 9:1 10:1 11:1 12:1 13:1 14:1 15:1 16:1 17:1 18:1 19:1 20:1 21:1 22:1 23:1 24:1 25:1 26:1 27:1 28:1 29:1 30:1
Since this is the first document, all of its features are present.
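For reference, a minimal sketch of how such a line can be generated (the feature table and word list are simplified assumptions, not my actual code):

    using System.Collections.Generic;
    using System.Linq;

    // featureIndex maps each known word to its 1-based LibSVM feature index.
    static string ToSvmLine(int label, IEnumerable<string> documentWords,
                            Dictionary<string, int> featureIndex)
    {
        var indices = documentWords
            .Where(featureIndex.ContainsKey)   // ignore words not in the feature table
            .Select(w => featureIndex[w])
            .Distinct()
            .OrderBy(i => i);                  // LibSVM wants ascending indices
        return label + " " + string.Join(" ", indices.Select(i => i + ":1"));
    }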
I am using 1, 0 as class labels.
I am using svm.Net for classification.
I provided 300 manually classified weight vectors as training data, and the generated model takes all of the vectors as support vectors, which is surely overfitting.
My total feature count (unique words / row count in the feature vector DB table) is 7610.
What could be the reason?
Because of this overfitting my project is now in pretty bad shape. It classifies every article it is given as a positive article.
In LibSVM binary classification is there any restriction on the class label?
I am using 0, 1 instead of -1 and +1. Is that a problem?
You need to do some type of parameter search. Also, if the classes are unbalanced, the classifier can get artificially high accuracy without doing much. This guide is good at teaching basic, practical things; you should probably read it.
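A rough sketch of such a search, following the exponential grid the LibSVM practical guide suggests (TrainAndCrossValidate is a hypothetical placeholder, not a real SVM.NET call):

    using System;

    // Try exponentially spaced C and gamma values and keep the pair
    // with the best cross-validation accuracy.
    double bestC = 0, bestGamma = 0, bestAccuracy = 0;
    for (int i = -5; i <= 15; i += 2)
        for (int j = -15; j <= 3; j += 2)
        {
            double c = Math.Pow(2, i), gamma = Math.Pow(2, j);
            double accuracy = TrainAndCrossValidate(trainingData, c, gamma); // hypothetical
            if (accuracy > bestAccuracy) { bestAccuracy = accuracy; bestC = c; bestGamma = gamma; }
        }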
As pointed out, a parameter search is probably a good idea before doing anything else.
I would also investigate the different kernels available to you. The fact that your input data is binary might be problematic for the RBF kernel (or might render its use suboptimal compared to another kernel). I have no idea which kernel would be better suited, though. Try a linear kernel, and look around for more suggestions/ideas :)
For more information and perhaps better answers, look on stats.stackexchange.com.
I would definitely try using -1 and +1 for your labels, that's the standard way to do it.
Also, how much data do you have? Since you're working in 7610-dimensional space, you could potentially have that many support vectors, where a different vector is "supporting" the hyperplane in each dimension.
With that many features, you might want to try some type of feature selection method, like principal component analysis.
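A simpler alternative sketch: prune rare words before training (documentFrequency is an assumed precomputed word-to-document-count map, not part of any library):

    using System.Collections.Generic;
    using System.Linq;

    // Keep only features that occur in at least minDocs documents;
    // very rare words mostly add noise in a 7610-dimensional space.
    static List<string> SelectFeatures(Dictionary<string, int> documentFrequency, int minDocs)
    {
        return documentFrequency
            .Where(kv => kv.Value >= minDocs)
            .Select(kv => kv.Key)
            .ToList();
    }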
I am writing an image processing project.
For part of it, to find a good threshold value, I need to find the peaks and valleys of the image's histogram.
I am writing my project in C# .NET,
but I need an algorithm or sample code in any language (Java, C, C++, ...) to understand the logic; I can convert it to C# myself.
Any document, algorithm, or piece of code would help.
Thanks
It's hard to beat Otsu's method for binary thresholding. Even if you insist on implementing the local-extrema search yourself, Otsu's method will give you a good result to compare against.
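For reference, a minimal sketch of Otsu's method over a precomputed 256-bin grayscale histogram (not tied to any particular library; names are illustrative):

    using System.Linq;

    // Returns the threshold that maximizes the between-class variance.
    static int OtsuThreshold(int[] histogram)
    {
        int total = histogram.Sum();
        long sumAll = 0;
        for (int i = 0; i < 256; i++) sumAll += (long)i * histogram[i];

        long sumBackground = 0;
        int weightBackground = 0, threshold = 0;
        double maxVariance = 0;

        for (int t = 0; t < 256; t++)
        {
            weightBackground += histogram[t];
            if (weightBackground == 0) continue;
            int weightForeground = total - weightBackground;
            if (weightForeground == 0) break;

            sumBackground += (long)t * histogram[t];
            double meanBackground = (double)sumBackground / weightBackground;
            double meanForeground = (double)(sumAll - sumBackground) / weightForeground;

            // Between-class variance, up to a constant factor.
            double variance = (double)weightBackground * weightForeground
                            * (meanBackground - meanForeground) * (meanBackground - meanForeground);
            if (variance > maxVariance) { maxVariance = variance; threshold = t; }
        }
        return threshold;
    }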
If you have already computed your histogram, finding peaks and valleys is computationally trivial (loop over it and find the local extrema). What is not trivial is finding "good" peaks and valleys for segmentation/thresholding. But that is not a matter of coding; it's a matter of modelling. You can google for it.
If you want a simple recipe, and if you know that your histogram has "essentially" two peaks and a valley in the middle (a "bimodal" histogram) and you want to locate that valley, I once implemented the following ad-hoc procedure with relative success (a sketch follows the steps below):
Compute all the extrema of the histogram (relative maxima/minima, including borders)
If there are only two maxima, AND if in between those maxima there is only one local minimum, we've found the valley. Return it.
Else, smooth the histogram (e.g. with a moving average) and go back to the first step.
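A rough sketch of that recipe, assuming a double[] histogram (it ignores the border bins for simplicity, and in practice you would also cap the number of smoothing passes):

    using System.Collections.Generic;

    // Repeatedly smooth the histogram until it is bimodal, then return the valley bin.
    static int FindValley(double[] histogram)
    {
        var h = (double[])histogram.Clone();
        while (true)
        {
            var maxima = new List<int>();
            var minima = new List<int>();
            for (int i = 1; i < h.Length - 1; i++)
            {
                if (h[i] > h[i - 1] && h[i] > h[i + 1]) maxima.Add(i);
                if (h[i] < h[i - 1] && h[i] < h[i + 1]) minima.Add(i);
            }
            // Two maxima with exactly one minimum between them: that's the valley.
            if (maxima.Count == 2 && minima.Count == 1
                && minima[0] > maxima[0] && minima[0] < maxima[1])
                return minima[0];

            // Otherwise smooth with a 3-point moving average and try again.
            var smoothed = new double[h.Length];
            smoothed[0] = h[0];
            smoothed[h.Length - 1] = h[h.Length - 1];
            for (int i = 1; i < h.Length - 1; i++)
                smoothed[i] = (h[i - 1] + h[i] + h[i + 1]) / 3.0;
            h = smoothed;
        }
    }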
I am dealing with datasets with missing data and need to be able to fill forward, backward, and gaps. So, for example, if I have data from Jan 1, 2000 to Dec 31, 2010, and some days are missing, when a user requests a timespan that begins before, ends after, or encompasses the missing data points, I need to "fill in" these missing values.
Is there a proper term for this concept of filling in data? Imputation is one term; I don't know if it is "the" term for it, though.
I presume there are multiple algorithms and methodologies for filling in missing data (using the last measured value, using the median/average/moving average between two known numbers, etc.).
Does anyone know the proper term for this problem, any online resources on this topic, or ideally links to open source implementations of some algorithms (C# preferably, but any language would be useful)?
The term you're looking for is interpolation. (obligatory wiki link)
You're asking for a C# solution with datasets but you should also consider doing this at the database level like this.
A simple, brute-force approach in C# could be to build an array of consecutive dates with your beginning and ending values as the min/max values. Then use that array to merge "interpolated" date values into your data set by inserting rows where the dataset has no row matching a date in your date array.
Here is an SO post that gets close to what you need: interpolating missing dates with C#. There is no accepted solution, but reading the question and the attempted answers may give you an idea of what to do next, e.g. treat the DateTime data in terms of Ticks (a long value type), run an interpolation scheme on that data, and then convert the interpolated long values back to DateTime values.
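A small sketch of that Ticks idea (names are illustrative):

    using System;

    // Linearly interpolate a value at time x between two known samples
    // (x0, y0) and (x1, y1), using Ticks as the numeric time axis.
    static double InterpolateAt(DateTime x, DateTime x0, double y0, DateTime x1, double y1)
    {
        double t = (double)(x.Ticks - x0.Ticks) / (x1.Ticks - x0.Ticks);
        return y0 + t * (y1 - y0);
    }

For example, a missing Jan 3 value between known Jan 1 and Jan 5 samples comes out at t = 0.5, i.e. halfway between the two known values.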
The algorithm you use will depend a lot on the data itself, the size of the gaps compared to the available data, and its predictability based on existing data. It could also incorporate other information you might know about what's missing, as is common in statistics, when your actual data may not reflect the same distribution as the universe across certain categories.
Linear and cubic interpolation are typical algorithms that are not difficult to implement; try googling those.
Here's a good primer with some code:
http://paulbourke.net/miscellaneous/interpolation/
The context of the discussion in that link is graphics but the concepts are universally applicable.
For the purpose of feeding statistical tests, a good search term is imputation - e.g. http://en.wikipedia.org/wiki/Imputation_%28statistics%29
Alright, a quick overview:
I have looked into the knapsack problem
http://en.wikipedia.org/wiki/Knapsack_problem
and I know it is what I need for my project, but the complicated part is that I need multiple sacks inside a main sack.
The large knapsack that holds all the "bags" can only carry x amount of "bags" (let's say 9 for the sake of this example). Each bag has different values:
Weight
Cost
Size
Capacity
and so on; all of those values are integers. Let's assume they range from 0-100.
Each inner bag will also be assigned a type, and there can only be one bag of each type within the outer bag, although the program input may contain multiple bags of the same type.
I need to assign a maximum weight that the main bag can hold, and all the other properties of the smaller bags need to be combined according to weighted values.
Example
Outer Bag:
Can hold 9 smaller bags
Weight no more than 98 [Give or take 5 either side]
Must hold one of each type; can only hold one of each type at a time.
Inner Bags:
Cost, Weighted at 100%
Size, Weighted at 67%
Capacity, Weighted at 44%
The program will be given multiple bags as input and must then work out combinations of smaller bags to go into the larger bag. There will be multiple solutions depending on the input, and the program should output the best ones for me.
I am wondering what you guys think the best way for me to approach this would be.
I will be programming it in either Java or C#. I would love to program it in PHP, but I'm afraid the algorithm would be very inefficient on web servers.
Thanks for any help you can give
-Zack
Okay, well, knapsack is NP-hard, so I'm pretty certain this will be NP-hard as well (if it weren't, you could solve knapsack by doing this with only one outer bag). So for an exactly optimal solution, you're probably going to be able to do no better than searching all combinations. The outline of the program you want will be something like:
    // Pseudocode: enumerate every possible combination and keep the best one.
    foreach (var combination in AllCombinations(bags))   // AllCombinations is hypothetical
    {
        if (IsBetter(combination, bestSoFar))            // IsBetter is hypothetical
            bestSoFar = combination;
    }
and the run time will be exponential. It sounds, though, like you might be able to get a near-optimal solution with dynamic programming. A rough C# sketch of the exact brute-force version follows.
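This sketch makes a few assumptions about your problem: the percentage "weights" act as scoring coefficients, at most 9 bags fit, at most one bag per type, and the total weight may not exceed 103 (98 plus the 5 slack). All names are hypothetical:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Bag
    {
        public string Type;
        public int Weight, Cost, Size, Capacity;
        // Combined score using the weightings from the question.
        public double Score => 1.00 * Cost + 0.67 * Size + 0.44 * Capacity;
    }

    static class Solver
    {
        const int MaxBags = 9, MaxWeight = 103; // 98, give or take 5

        // Recursively try every selection of at most MaxBags bags, one per type,
        // and remember the best-scoring combination within the weight limit.
        // (Enforcing "one of each type" as well would go where the score is taken.)
        public static void Search(List<Bag> bags, int index, List<Bag> current,
                                  HashSet<string> usedTypes,
                                  ref List<Bag> best, ref double bestScore)
        {
            double score = current.Sum(b => b.Score);
            if (score > bestScore) { bestScore = score; best = new List<Bag>(current); }
            if (index == bags.Count || current.Count == MaxBags) return;

            // Option 1: skip bags[index].
            Search(bags, index + 1, current, usedTypes, ref best, ref bestScore);

            // Option 2: take bags[index] if its type is unused and weight allows.
            var bag = bags[index];
            int totalWeight = current.Sum(b => b.Weight);
            if (!usedTypes.Contains(bag.Type) && totalWeight + bag.Weight <= MaxWeight)
            {
                usedTypes.Add(bag.Type);
                current.Add(bag);
                Search(bags, index + 1, current, usedTypes, ref best, ref bestScore);
                current.RemoveAt(current.Count - 1);
                usedTypes.Remove(bag.Type);
            }
        }
    }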
Consider using Prolog for your logical programming. There are multiple implementations of it, including P# on Mono (.NET). There's a bit of a learning curve, but once you get used to it, it's pretty much in a league of its own for this kind of problem solving.
Hope this helps. Cheers!
link to P#