I wrote up a quick C# extension method, but wasn't sure if there was a cleaner way to accomplish what I want. It does work, but building the boundary by repeating a string of 9s and inserting a decimal point feels slightly hacky.
The goal is that, at the application level, we can clean or fix any data problems before sending values off to the database, to prevent overflows.
Note: PCL library, so I can't reference outside DLLs in this case.
public static bool TsqlDecimalBoundariesCheck(this decimal valueToCheck, int precision, int scale)
{
    if (scale > precision) throw new ArgumentException($"BOUNDARY CHECK: Scale [{scale}] must not be higher than Precision [{precision}]");

    // Create 'precision' digits of the value 9
    var precisionValue = new string('9', precision);

    // Insert the decimal point 'scale' positions from the right
    if (scale > 0)
    {
        precisionValue = precisionValue.Insert((precision - scale), ".");
    }

    // Get the upper and lower bounds
    // (InvariantCulture so the '.' separator parses regardless of locale; needs using System.Globalization)
    var upperBoundary = decimal.Parse(precisionValue, CultureInfo.InvariantCulture);
    var lowerBoundary = upperBoundary * -1;

    return (valueToCheck <= upperBoundary) && (valueToCheck >= lowerBoundary);
}
And some quick unit tests to accompany it:
[TestMethod]
public void TestBoundaryConstraints()
{
    var precision = 4;
    var scale = 1;

    var testValue = 1000m;
    var result = testValue.TsqlDecimalBoundariesCheck(precision, scale);
    Assert.IsFalse(result, $"Value {testValue} is expected to be outside Decimal({precision}, {scale})");

    testValue = -1000m;
    result = testValue.TsqlDecimalBoundariesCheck(precision, scale);
    Assert.IsFalse(result, $"Value {testValue} is expected to be outside Decimal({precision}, {scale})");

    testValue = 100m;
    result = testValue.TsqlDecimalBoundariesCheck(precision, scale);
    Assert.IsTrue(result, $"Value {testValue} is expected to be within Decimal({precision}, {scale})");

    testValue = 999.9m;
    result = testValue.TsqlDecimalBoundariesCheck(precision, scale);
    Assert.IsTrue(result, $"Value {testValue} is expected to be within Decimal({precision}, {scale})");

    testValue = -999.9m;
    result = testValue.TsqlDecimalBoundariesCheck(precision, scale);
    Assert.IsTrue(result, $"Value {testValue} is expected to be within Decimal({precision}, {scale})");
}
You can definitely get rid of the hacky string repetition by computing (10^p - 1) * 10^-s to get your upper and lower bounds.
If you want to check to make sure scale doesn't get truncated, you can actually truncate it and then compare the values. If the truncated value and the original value are the same, the scale is valid.
Putting it all together, you get something like this:
public static bool TsqlDecimalBoundariesCheck(this decimal valueToCheck, int precision, int scale)
{
    if (scale > precision) throw new ArgumentException($"BOUNDARY CHECK: Scale [{scale}] must not be higher than Precision [{precision}]");

    // Upper/lower bounds: (10^precision - 1) * 10^-scale
    var step = (decimal)Math.Pow(10, precision);
    var upperBoundary = (step - 1) * (decimal)Math.Pow(10, -scale);
    var lowerBoundary = -1 * upperBoundary;

    // Truncate the value to the requested scale.
    // If the truncated value does not equal the original, it must've been out of scale.
    step = (decimal)Math.Pow(10, scale);
    var truncated = Math.Truncate(step * valueToCheck) / step;

    return (valueToCheck <= upperBoundary)
        && (valueToCheck >= lowerBoundary)
        && truncated == valueToCheck;
}
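As a quick sanity check of that version against decimal(4, 1), which spans -999.9 through 999.9 in steps of 0.1 (these example values are mine, not from the original answer):

var ok      = 999.9m.TsqlDecimalBoundariesCheck(4, 1); // true: fits
var tooBig  = 1000m.TsqlDecimalBoundariesCheck(4, 1);  // false: overflows the precision
var tooFine = 99.99m.TsqlDecimalBoundariesCheck(4, 1); // false: scale would be truncated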
In my C# program I have a dataset where each data point consists of:
a stimulus intensity (intensity) as x-coordinate
the percentage of correct responses (percentageCorrect) to the stimulus as y-coordinate
When the intensity is low, percentageCorrect is low. When the intensity is high, percentageCorrect is high. The function graph is an S-shaped curve, as percentageCorrect reaches an asymptote at the low and high ends.
I am trying to find the threshold intensity where percentageCorrect is halfway between the asymptotes at either end (the center of the S-shaped curve).
I understand this to be a function minimization (curve-fitting) problem that can be solved by the Nelder-Mead simplex algorithm.
I am trying to solve my problem using the Nelder-Mead simplex implementation in Math.NET Numerics and its IObjectiveFunction parameter.
However, I am having trouble understanding the API of the NelderMeadSimplex class's FindMinimum method and the IObjectiveFunction EvaluateAt method.
I am new to the numerical analysis that is a prerequisite for this question.
Specific questions are:
For the NelderMeadSimplex class's FindMinimum method, what are the initialGuess and initialPertubation parameters?
For the IObjectiveFunction EvaluateAt method, what is the point parameter? I vaguely understand the point parameter to be a datum in the dataset being minimized.
How can I map my data set to this API and solve my problem?
Thanks for any guidance on this.
The initial guess is a guess at the model parameters: one entry per parameter you are fitting.
I've always used the overloads that don't require the initialPertubation parameter, so I can't help you there.
The objective function is what you are trying to minimize. For example, for a least-squares fit, it would calculate the sum of squared errors at the parameter vector given in the argument. Something like this:
private double SumSqError(Vector<double> v)
{
    double err = 0;
    for (int i = 0; i < 100; i++)
    {
        // model: y = v[0] + v[1] * exp(v[2] * x)
        double y_val = v[0] + v[1] * Math.Exp(v[2] * x[i]);
        err += Math.Pow(y_val - y[i], 2);
    }
    return err;
}
You don't have to supply the point; the algorithm supplies it over and over while searching for the minimum. Note that the subroutine has access to the class-level arrays x and y.
Here is the code for a test program fitting a function to random data:
// Assumes: using MathNet.Numerics.LinearAlgebra;
//          using MathNet.Numerics.LinearAlgebra.Double;
//          using MathNet.Numerics.Optimization;
// x and y are class-level double[] fields.
private void btnMinFit_Click(object sender, EventArgs e)
{
    Random RanGen = new Random();
    x = new double[100];
    y = new double[100];

    // fit an exponential expression with three parameters
    double a = 5.0;
    double b = 0.5;
    double c = 0.05;

    // create the data set
    for (int i = 0; i < 100; i++) x[i] = 10 + Convert.ToDouble(i) * 90.0 / 99.0; // values span 10 to 100
    for (int i = 0; i < 100; i++)
    {
        double y_val = a + b * Math.Exp(c * x[i]);
        y[i] = y_val + 0.1 * RanGen.NextDouble() * y_val; // add an error term scaled to the y-value
    }

    var f1 = new Func<Vector<double>, double>(v => SumSqError(v));
    var obj = ObjectiveFunction.Value(f1);
    var solver = new NelderMeadSimplex(1e-5, maximumIterations: 10000);
    var initialGuess = new DenseVector(new[] { 3.0, 6.0, 0.6 });
    var result = solver.FindMinimum(obj, initialGuess);
    Console.WriteLine(result.MinimizingPoint.ToString());
}
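To map your data set onto this API: make the model parameters the thing the solver searches over, fit an S-shaped curve to your (intensity, percentageCorrect) pairs, and read the threshold off the fitted parameters. A minimal sketch of such an objective (my own addition; the parameter layout and field names are assumptions, not from the original answer):

// Sketch: least-squares objective for a logistic curve fitted to
// class-level arrays intensity[] and percentageCorrect[] (assumed fields).
// v[0] = lower asymptote, v[1] = upper asymptote,
// v[2] = slope, v[3] = midpoint: the threshold you are after.
private double SigmoidSsq(Vector<double> v)
{
    double err = 0;
    for (int i = 0; i < intensity.Length; i++)
    {
        double model = v[0] + (v[1] - v[0]) / (1 + Math.Exp(-v[2] * (intensity[i] - v[3])));
        err += Math.Pow(model - percentageCorrect[i], 2);
    }
    return err;
}

After FindMinimum runs with this objective, result.MinimizingPoint[3] is the threshold intensity you were looking for.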
I'm writing a small technical-analysis library that consists of items that are not available in TA-Lib. I started with an example I found on cTrader and matched it against the code in the TradingView version.
Here's the Pine Script code from TradingView:
len = input(9, minval=1, title="Length")
high_ = highest(hl2, len)
low_ = lowest(hl2, len)
round_(val) => val > .99 ? .999 : val < -.99 ? -.999 : val
value = 0.0
value := round_(.66 * ((hl2 - low_) / max(high_ - low_, .001) - .5) + .67 * nz(value[1]))
fish1 = 0.0
fish1 := .5 * log((1 + value) / max(1 - value, .001)) + .5 * nz(fish1[1])
fish2 = fish1[1]
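As an aside, the round_ clamp in that script translates to a small C# helper like the sketch below (my addition; in my implementation I'm assuming the Maths.Normalize helper plays this role):

// Equivalent of Pine's round_: clamp to +/-0.999 once |val| exceeds 0.99
private static decimal Clamp(decimal val)
{
    return val > 0.99m ? 0.999m
         : val < -0.99m ? -0.999m
         : val;
}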
Here's my attempt to implement the indicator:
public class FisherTransform : IndicatorBase
{
    public int Length = 9;
    public decimal[] Fish { get; set; }
    public decimal[] Trigger { get; set; }

    decimal _maxHigh;
    decimal _minLow;
    private decimal _value1;
    private decimal _lastValue1;

    public FisherTransform(IEnumerable<Candle> candles, int length)
        : base(candles)
    {
        Length = length;
        RequiredCount = Length;
        _lastValue1 = 1;
    }

    protected override void Initialize()
    {
        Fish = new decimal[Series.Length];
        Trigger = new decimal[Series.Length];
    }

    public override void Compute(int startIndex = 0, int? endIndex = null)
    {
        if (endIndex == null)
            endIndex = Series.Length;

        for (int index = 0; index < endIndex; index++)
        {
            if (index == 1)
            {
                Fish[index - 1] = 1;
            }

            _minLow = Series.Average.Lowest(Length, index);
            _maxHigh = Series.Average.Highest(Length, index);

            _value1 = Maths.Normalize(0.66m * ((Maths.Divide(Series.Average[index] - _minLow, Math.Max(_maxHigh - _minLow, 0.001m)) - 0.5m) + 0.67m * _lastValue1));
            _lastValue1 = _value1;

            Fish[index] = 0.5m * Maths.Log(Maths.Divide(1 + _value1, Math.Max(1 - _value1, .001m))) + 0.5m * Fish[index - 1];
            Trigger[index] = Fish[index - 1];
        }
    }
}
IndicatorBase class and CandleSeries class
Math Helpers
The problem
The output values appear to be within the expected range; however, my Fisher Transform cross-overs do not match up with what I see on TradingView's version of the indicator.
Question
How do I properly implement the Fisher Transform indicator in C#? I'd like this to match TradingView's Fisher Transform output.
What I Know
I've checked my data against other indicators that I have personally written and against indicators from TA-Lib, and those indicators pass my unit tests. I've also checked my data against the TradingView data candle by candle and found that my data matches as expected. So I don't suspect my data is the issue.
Specifics
CSV Data - NFLX 5 min agg
Pictured below (chart image omitted) is the above Fisher Transform code applied to a TradingView chart, with Fisher in cyan and Trigger in magenta. My goal is to match this output as closely as possible.
Expected Outputs:
Crossover completed at 15:30 ET
Approx Fisher Value is 2.86
Approx Trigger Value is 1.79
Crossover completed at 10:45 ET
Approx Fisher Value is -3.67
Approx Trigger Value is -3.10
My Actual Outputs:
Crossover completed at 15:30 ET
My Fisher Value is 1.64
My Trigger Value is 1.99
Crossover completed at 10:45 ET
My Fisher Value is -1.63
My Trigger Value is -2.00
Bounty
To make your life easier, I'm including a small console application complete with passing and failing unit tests. All unit tests are conducted against the same data set. The passing unit tests are from a tested, working Simple Moving Average indicator. The failing unit tests are against the Fisher Transform indicator in question.
Project Files (updated 5/14)
Help get my FisherTransform tests to pass and I'll award the bounty.
Just comment if you need any additional resources or information.
Alternative Answers that I'll consider
Submit your own working FisherTransform in C#
Explain why my FisherTransform is actually working as expected
The code has two errors, plus an issue with the test values.
1) Misplaced parentheses. The corrected line is:
_value1 = Maths.Normalize(0.66m * (Maths.Divide(Series.Average[index] - _minLow, Math.Max(_maxHigh - _minLow, 0.001m)) - 0.5m) + 0.67m * _lastValue1);
2) The min and max functions must be:
public static decimal Highest(this decimal[] series, int length, int index)
{
    var maxVal = series[index]; // <----- HERE WAS AN ERROR!
    var lookback = Math.Max(index - length, 0);
    for (int i = index; i-- > lookback;)
        maxVal = Math.Max(series[i], maxVal);
    return maxVal;
}

public static decimal Lowest(this decimal[] series, int length, int index)
{
    var minVal = series[index]; // <----- HERE WAS AN ERROR!
    var lookback = Math.Max(index - length, 0);
    for (int i = index; i-- > lookback;)
    {
        //if (series[i] != 0) // <----- HERE WAS AN ERROR!
        minVal = Math.Min(series[i], minVal);
    }
    return minVal;
}
3) Confusing test params: please recheck your unit-test values. After the update, the tests are still not fixed. For example, the first FisherTransforms_ValuesAreReasonablyClose_First() has the Fisher and Trigger expectations swapped:
var fish = result.Fish.Last();    // equals -3.1113144510775780365063063706
var trig = result.Trigger.Last(); // equals -3.6057793808025449204415435710

// TradingView values for the NFLX 5m chart at 10:45 ET
var fisherValue = -3.67m;
var triggerValue = -3.10m;
Suppose I have the number 87.6 of type double. I want to round it, so I applied C#'s built-in round method to get output like this:
double test2 = 87.6;
Console.WriteLine(Math.Round(test2, 0));
This generates 88, which is fine. However, I want it rounded down to 87: my cutoff would be 0.8 and not 0.5. So, for instance, if my input is 87.8 then I want to get 88, and if my input is 88.7 then I want to round it down to 88.
I got the answer from the comment section. Here is the logic:
double test2 = 87.6;
test2 -= 0.3;
Console.WriteLine(Math.Round(test2, 0));
Subtracting 0.3 is what makes the difference: it shifts the rounding break from 0.5 up to 0.8 (in general, subtract breakValue - 0.5).
I think this would work:
public static class RoundingExtensions {
    public static int RoundWithBreak(this double valueToRound, double breakValue = .5) {
        if (breakValue <= 0 || breakValue >= 1) { throw new ArgumentOutOfRangeException(nameof(breakValue), "Must be between 0 and 1"); }
        var difference = breakValue - .5;
        var min = (int)Math.Floor(valueToRound);
        // AwayFromZero so a fraction sitting exactly on the break rounds up
        var toReturn = (int)Math.Round(valueToRound - difference, 0, MidpointRounding.AwayFromZero);
        return toReturn < min ? min : toReturn;
    }
}
Consumed:
var test = 8.7;
var result = test.RoundWithBreak(.8);
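To sanity-check the corrected version (my worked examples, not from the original post): with breakValue = .8, 8.7 rounds down to 8, 8.8 rounds up to 9, and 87.6 rounds down to 87.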
I have recently started using the Math.NET Numerics statistical package to do data analysis in C#.
I am looking for a cross-correlation function. Does Math.NET have an API for this?
Previously I have been using MATLAB's xcorr or Python's numpy.correlate, so I am looking for a C# equivalent of these.
I have looked through their documentation, but it isn't very straightforward.
https://numerics.mathdotnet.com/api/
Correlation can be calculated by any of the methods in MathNet.Numerics.Statistics.Correlation, like Pearson or Spearman. But if you're looking for results like the ones provided by MATLAB's xcorr or autocorr, then you have to manually calculate the correlation using those methods for each lag/delay value between your input samples. Note that this example includes both cross- and auto-correlation.
double fs = 50;            // sampling rate, Hz
double te = 1;             // end time, seconds
int size = (int)(fs * te); // sample size
var t = Enumerable.Range(0, size).Select(p => p / fs).ToArray();
var y1 = t.Select(p => p < te / 2 ? 1.0 : 0).ToArray();
var y2 = t.Select(p => p < te / 2 ? 1.0 - 2 * p : 0).ToArray();
var r12 = StatsHelper.CrossCorrelation(y1, y2); // Y1 * Y2
var r21 = StatsHelper.CrossCorrelation(y2, y1); // Y2 * Y1
var r11 = StatsHelper.CrossCorrelation(y1, y1); // Y1 * Y1 autocorrelation
StatsHelper:
public static class StatsHelper
{
    public static LagCorr CrossCorrelation(double[] x1, double[] x2)
    {
        if (x1.Length != x2.Length)
            throw new ArgumentException("Samples must have the same size.");

        var len = x1.Length;
        var len2 = 2 * len;
        var len3 = 3 * len;
        var s1 = new double[len3];
        var s2 = new double[len3];
        var cor = new double[len2];
        var lag = new double[len2];

        // Zero-pad: x1 sits in the middle third of s1, x2 starts at the left of s2
        Array.Copy(x1, 0, s1, len, len);
        Array.Copy(x2, 0, s2, 0, len);

        // Slide s2 one sample to the right per iteration, correlating at each lag
        for (int i = 0; i < len2; i++)
        {
            cor[i] = Correlation.Pearson(s1, s2);
            lag[i] = i - len;
            Array.Copy(s2, 0, s2, 1, s2.Length - 1);
            s2[0] = 0;
        }

        return new LagCorr { Corr = cor, Lag = lag };
    }
}
LagCorr:
public class LagCorr
{
    public double[] Lag { get; set; }
    public double[] Corr { get; set; }
}
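If what you ultimately want is the delay between the two signals, you'd scan the result for the lag with the maximum correlation. A minimal sketch using the classes above (my addition):

// Find the lag at which the cross-correlation peaks
var r = StatsHelper.CrossCorrelation(y1, y2);
int iMax = 0;
for (int i = 1; i < r.Corr.Length; i++)
    if (r.Corr[i] > r.Corr[iMax]) iMax = i;
Console.WriteLine($"Peak correlation {r.Corr[iMax]:0.000} at lag {r.Lag[iMax]}");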
EDIT: Adding Matlab comparison results:
clear;
step=0.02;
t=[0:step:1-step];
y1=ones(1,50);
y1(26:50)=0;
y2=[1-2*t];
y2(26:50)=0;
[cor12,lags12]=xcorr(y1,y2);
[cor21,lags21]=xcorr(y2,y1);
[cor11,lags11]=xcorr(y1,y1);
[cor22,lags22]=xcorr(y2,y2);
subplot(2,3,1);
plot(t,y1);
title('Y1');
axis([0 1 -0.5 1.5]);
subplot(2,3,2);
plot(lags12,cor12);
title('Y1*Y2');
axis([-30 30 0 15]);
subplot(2,3,3);
plot(lags11,cor11);
title('Y1*Y1');
axis([-30 30 0 30]);
subplot(2,3,4);
plot(t,y2);
title('Y2');
axis([0 1 -0.5 1.5]);
subplot(2,3,5);
plot(lags21,cor21);
title('Y2*Y1');
axis([-30 30 0 15]);
subplot(2,3,6);
plot(lags22,cor22);
title('Y2*Y2');
axis([-30 30 0 10]);
I have tried the above solution with a sine wave that was shifted backwards by 20 time units with respect to a first sine wave. It gave me the correct result that the maximum of the correlation is at -20 (see below). One could discuss whether it's appropriate to apply zero-padding, since the zeros are not usually part of the signal. Note also that MATLAB's cross-correlation is not normalized the same way; it is not a "Pearson correlation" as in the example above.
The definition of MATLAB's cross-correlation is different: for scaling option "none" it is a convolution with the time-reversed signal. There are also various scaling options, but none of them gives the same result as the Pearson correlation:
matlab definition of xcorr
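(My paraphrase of that definition: for scaling "none", xcorr computes R_xy(m) = sum over n of x(n+m) * y(n), conjugated for complex signals; a sliding dot product with no mean removal or variance normalization, which is why it doesn't match Pearson.)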
My result (plot omitted): the cross-correlation of sin(n*0.1) with sin(n*0.1 - 20*0.1), computed using the example above, peaks at lag -20.
We have a very data-intensive system. It stores raw data, then computes percentages based on the number of correct responses / total trials.
Recently we have had customers who want to import old data into our system.
I need a way to convert a percentage to the nearest fraction.
Examples:
33% needs to give me 2/6, even though 1/3 is .33333333.
67% needs to give me 4/6, even though 4/6 is .6666667.
I realize I could just compute that to be 67/100, but that means I'd have to add 100 data points to the system when 6 would suffice.
Does anyone have any ideas?
EDIT
The denominator could be anything. They are giving me a raw, rounded percentage, and I'm trying to get as close to it with RAW data as possible.
Your requirements are contradictory: on the one hand, you want to "convert a percentage to the nearest fraction" (*), but on the other hand, you want fractions with small(est) numbers. You need to find some compromise on when/how to drop precision in favor of smaller numbers. The problem as it stands is not solvable.
(*) The nearest fraction f for any given (integer) percentage n is n/100, by definition.
I have tried to satisfy your requirement by using continued fractions. By limiting the depth to three, I got a reasonable approximation.
I failed to come up with an iterative (or recursive) approach in reasonable time. Nevertheless, I have cleaned it up a little. (I know that three-letter variable names are not good, but I can't think of better names for them :-/ )
The code gives you the best rational approximation it can find within the specified tolerance. The resulting fraction is reduced and is the best approximation among all fractions with the same or a lower denominator.
public partial class Form1 : Form
{
    Random rand = new Random();

    public Form1()
    {
        InitializeComponent();
    }

    private void button1_Click(object sender, EventArgs e)
    {
        for (int i = 0; i < 10; i++)
        {
            double value = rand.NextDouble();
            var fraction = getFraction(value);
            var numerator = fraction.Key;
            var denominator = fraction.Value;
            System.Console.WriteLine(string.Format("Value {0:0.0000} approximated by {1}/{2} = {3:0.0000}", value, numerator, denominator, (double)numerator / denominator));
        }
        /*
        Output:
        Value 0,4691 approximated by 8/17 = 0,4706
        Value 0,0740 approximated by 1/14 = 0,0714
        Value 0,7690 approximated by 3/4 = 0,7500
        Value 0,7450 approximated by 3/4 = 0,7500
        Value 0,3748 approximated by 3/8 = 0,3750
        Value 0,7324 approximated by 3/4 = 0,7500
        Value 0,5975 approximated by 3/5 = 0,6000
        Value 0,7544 approximated by 3/4 = 0,7500
        Value 0,7212 approximated by 5/7 = 0,7143
        Value 0,0469 approximated by 1/21 = 0,0476
        Value 0,2755 approximated by 2/7 = 0,2857
        Value 0,8763 approximated by 7/8 = 0,8750
        Value 0,8255 approximated by 5/6 = 0,8333
        Value 0,6170 approximated by 3/5 = 0,6000
        Value 0,3692 approximated by 3/8 = 0,3750
        Value 0,8057 approximated by 4/5 = 0,8000
        Value 0,3928 approximated by 2/5 = 0,4000
        Value 0,0235 approximated by 1/43 = 0,0233
        Value 0,8528 approximated by 6/7 = 0,8571
        Value 0,4536 approximated by 5/11 = 0,4545
        */
    }

    private KeyValuePair<int, int> getFraction(double value, double tolerance = 0.02)
    {
        // first continued-fraction terms of the value, truncated and rounded
        double f0 = 1 / value;
        double f1 = 1 / (f0 - Math.Truncate(f0));
        int a_t = (int)Math.Truncate(f0);
        int a_r = (int)Math.Round(f0);
        int b_t = (int)Math.Truncate(f1);
        int b_r = (int)Math.Round(f1);
        int c = (int)Math.Round(1 / (f1 - Math.Truncate(f1)));

        // use the shallowest expansion that lands within the tolerance
        if (Math.Abs(1.0 / a_r - value) <= tolerance)
            return new KeyValuePair<int, int>(1, a_r);
        else if (Math.Abs(b_r / (a_t * b_r + 1.0) - value) <= tolerance)
            return new KeyValuePair<int, int>(b_r, a_t * b_r + 1);
        else
            return new KeyValuePair<int, int>(c * b_t + 1, c * a_t * b_t + a_t + c);
    }
}
Would it have to return 2/6 rather than 1/3? If it's always in sixths, then the numerator is just
Math.Round(33.0 * 6 / 100) = Math.Round(1.98) = 2
(note the 33.0: with integer division, (33 * 6) / 100 would truncate to 1 before rounding).
Answering my own question here. Would this work?
public static Fraction Convert(decimal value) {
    for (decimal numerator = 1; numerator <= 10; numerator++) {
        for (decimal denominator = 1; denominator < 10; denominator++) {
            var result = numerator / denominator;
            if (Math.Abs(value - result) < .01m)
                return new Fraction() { Numerator = numerator, Denominator = denominator };
        }
    }
    throw new Exception("No fraction found within tolerance.");
}
This will keep my denominator below 10.
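One caveat (my observation, not part of the original post): because the numerator loop starts at 1, Convert(0.33m) returns the reduced form 1/3 rather than 2/6, so this won't preserve a specific denominator like the sixths in the original examples.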