C# Interpolating small array into larger array - c#

I have two arrays of doubles X and Y of the same size (any size is possible).
The spacing between x-points is constant, i.e. X[1]-X[0] = X[2]-X[1] = ...
These two arrays together form a curve (say C1) of (x[i], y[i]) points.
I have another fixed curve C2, given by the following limit points:
x-limit    y-limit
0          -5
1e9        -5
2e9        -6.5
3e9        -9.2
6e9        -16.5
12e9       -29
I want to check whether C1 hits or crosses under C2, but I can't do that directly due to the difference in size.
What I thought of is interpolating the arrays of C2 (size 6) into arrays the same size as those of C1.
This way I would have two Y-arrays of the same size (one from C1 and one from C2) and I can check the difference (if it is negative, C1 is below the limit).
My question is: what can I use to interpolate the small array into a larger one, taking into consideration the difference between y-points? And is there any other way to check whether my initial arrays exceed the limit?
If there is anything I can add to make this clearer, let me know. Thank you.

Rather than storing interpolated points, I would calculate and use them on the fly.
Assuming that the points to be compared are sorted in ascending order, I would proceed as follows:
double limitX1 = limitX[0];
double limitY1 = limitY[0];
// Create a fake point to the left
double limitX0 = -2;
double limitY0 = limitY1;
double f = 0.0;
int actualLimitIndex = 0;
for (int i = 0; i < x.Length; i++) {
    double xi = x[i];
    double yi = y[i];
    // Advance to the limit segment that contains xi
    while (xi > limitX1) {
        actualLimitIndex++;
        limitX0 = limitX1;
        limitY0 = limitY1;
        limitX1 = limitX[actualLimitIndex];
        limitY1 = limitY[actualLimitIndex];
        f = (limitY1 - limitY0) / (limitX1 - limitX0);
    }
    // Interpolate the limit at xi
    double limitYAtX = limitY0 + (xi - limitX0) * f;
    // Do whatever you have to do here.
    if (yi > limitYAtX) {
        // We are above the limit
    } else {
        // We are at or below the limit
    }
}
This also assumes that no x-value is below the minimum or above the maximum x-limit. Add additional checks if this can be the case.
This uses a linear interpolation. See: Linear interpolation (Wikipedia).
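As a quick worked check (my numbers, plugged into the C2 table from the question): for x = 1.5e9 the active segment runs from (1e9, -5) to (2e9, -6.5), so:
double f = (-6.5 - (-5.0)) / (2e9 - 1e9);      // slope: -1.5e-9
double limitYAtX = -5.0 + (1.5e9 - 1e9) * f;   // interpolated limit: -5.75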

'Grokkable' algorithm to understand exponentiation where the exponent is floating point

To clarify first:
2^3 = 8. That's equivalent to 2*2*2. Easy.
2^4 = 16. That's equivalent to 2*2*2*2. Also easy.
2^3.5 = 11.313708... Er, that's not so easy to grok.
What I want is a simple algorithm which most clearly shows how 2^3.5 = 11.313708. It should preferably not use any functions apart from the basic addition, subtraction, multiplication, and division operators.
The code certainly doesn't have to be fast, nor does it necessarily need to be short (though that would help). Don't worry, it can be approximate to a given user-specified accuracy (which should also be part of the algorithm). I'm hoping there will be a binary chop/search type thing going on, as that's pretty simple to grok.
So far I've found this, but the top answer is far from simple to understand on a conceptual level.
The more answers the merrier, so I can try to understand different ways of attacking the problem.
My language preference for the answer would be C#/C/C++/Java, or pseudocode for all I care.
Ok, let's implement pow(x, y) using only binary searches, addition and multiplication.
Driving y below 1
First, take this out of the way:
pow(x, y) == pow(x*x, y/2)
pow(x, y) == 1/pow(x, -y)
This is important to handle negative exponents and drive y below 1, where things start getting interesting. This reduces the problem to finding pow(x, y) where 0<y<1.
Implementing sqrt
In this answer I assume you know how to perform sqrt. I know sqrt(x) = x^(1/2), but it is easy to implement it using just a binary search that finds y = sqrt(x) via the condition y*y = x, e.g.:
#include <cmath>

#define EPS 1e-8

double sqrt2(double x) {
    double a = 0, b = x > 1 ? x : 1;
    while (fabs(a - b) > EPS) {
        double y = (a + b) / 2;
        if (y * y > x) b = y; else a = y;
    }
    return a;
}
Finding the answer
The rationale is that every number below 1 can be approximated as a sum of fractions 1/2^x:
0.875 = 1/2 + 1/4 + 1/8
0.333333... = 1/4 + 1/16 + 1/64 + 1/256 + ...
If you find those fractions, you actually find that:
x^0.875 = x^(1/2+1/4+1/8) = x^(1/2) * x^(1/4) * x^(1/8)
That ultimately leads to
sqrt(x) * sqrt(sqrt(x)) * sqrt(sqrt(sqrt(x)))
So, implementation (in C++)
#include <cmath>

#define EPS 1e-8

double pow2(double x, double y) {
    if (x < 0 && fabs(round(y) - y) < EPS) {
        // Negative base: only (near-)integer exponents are meaningful
        return pow2(-x, y) * ((int)round(y) % 2 != 0 ? -1 : 1);
    } else if (y < 0) {
        return 1 / pow2(x, -y);
    } else if (y > 1) {
        return pow2(x * x, y / 2);
    } else {
        double fraction = 1;
        double result = 1;
        while (y > EPS) {
            if (y >= fraction) {
                y -= fraction;
                result *= x;
            }
            fraction /= 2;
            x = sqrt2(x);
        }
        return result;
    }
}
Deriving ideas from the other excellent posts, I came up with my own implementation. The answer is based on the idea that base^(exponent*accuracy) = answer^accuracy. Given that we know the base, exponent and accuracy variables beforehand, we can perform a search (binary chop or whatever) so that the equation can be balanced by finding answer. We want the exponent in both sides of the equation to be an integer (otherwise we're back to square one), so we can make accuracy any size we like, and then round it to the nearest integer afterwards.
I've given two ways of doing it. The first is very slow, and will often produce extremely high numbers which won't work with most languages. On the other hand, it doesn't use log, and is simpler conceptually.
public double powSimple(double a, double b)
{
    int accuracy = 10;
    bool negExponent = b < 0;
    b = Math.Abs(b);
    bool ansMoreThanA = (a > 1 && b > 1) || (a < 1 && b < 1); // Example: 0.5^2 = 0.25, so the answer is lower than a.
    double accuracy2 = 1.0 + 1.0 / accuracy;
    double total = a;
    for (int i = 1; i < accuracy * b; i++) total = total * a;
    double t = a;
    while (true) {
        double t2 = t;
        for (int i = 1; i < accuracy; i++) t2 = t2 * t; // Not even a binary search. We just hunt forwards by a certain increment
        if ((ansMoreThanA && t2 > total) || (!ansMoreThanA && t2 < total)) break;
        if (ansMoreThanA) t *= accuracy2; else t /= accuracy2;
    }
    if (negExponent) t = 1 / t;
    return t;
}
This one below is a little more involved as it uses log(). But it is much quicker and doesn't suffer from the super-high number problems as above.
public double powSimple2(double a, double b)
{
    int accuracy = 1000000;
    bool negExponent = b < 0;
    b = Math.Abs(b);
    double accuracy2 = 1.0 + 1.0 / accuracy;
    bool ansMoreThanA = (a > 1 && b > 1) || (a < 1 && b < 1); // Example: 0.5^2 = 0.25, so the answer is lower than a.
    double total = Math.Log(a) * accuracy * b;
    double t = a;
    while (true) {
        double t2 = Math.Log(t) * accuracy;
        if ((ansMoreThanA && t2 > total) || (!ansMoreThanA && t2 < total)) break;
        if (ansMoreThanA) t *= accuracy2; else t /= accuracy2;
    }
    if (negExponent) t = 1 / t;
    return t;
}
You can verify that 2^3.5 = 11.313708 very easily: check that 11.313708^2 = (2^3.5)^2 = 2^7 = 128
I think the easiest way to understand the computation you would actually do for this would be to refresh your understanding of logarithms - one starting point would be http://en.wikipedia.org/wiki/Logarithm#Exponentiation.
If you really want to compute non-integer powers with minimal technology one way to do that would be to express them as fractions with denominator a power of two and then take lots of square roots. E.g. x^3.75 = x^3 * x^(1/2) * x^(1/4) then x^(1/2) = sqrt(x), x^(1/4) = sqrt(sqrt(x)) and so on.
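For example, a tiny C# sketch (mine) of that decomposition:
// x^3.75 = x^3 * x^(1/2) * x^(1/4), built from square roots only
static double Pow3_75(double x) =>
    x * x * x * Math.Sqrt(x) * Math.Sqrt(Math.Sqrt(x));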
Here is another approach, based on the idea of verifying a guess. Given y, you want to find x such that x^(a/b) = y, where a and b are integers. This equation implies that x^a = y^b. You can calculate y^b, since you know both numbers. You know a, so you can - as you originally suspected - use binary chop or perhaps some numerically more efficient algorithm to solve x^a = y^b for x by simply guessing x, computing x^a for this guess, comparing it with y^b, and then iteratively improving the guess.
Example: suppose we wish to find 2^0.878 by this method. Then set a = 439, b = 500, so we wish to find 2^(439/500). If we set x=2^(439/500) we have x^500 = 2^439, so compute 2^439 and (by binary chop or otherwise) find x such that x^500 = 2^439.
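A minimal C# sketch of that guess-and-verify loop (my code and naming, not the answerer's; it assumes 0 < a < b, so the answer lies between 0 and max(base, 1)):
// Approximate base^(a/b) by finding x with x^b == base^a via binary chop.
static double PowByChop(double baseVal, int a, int b)
{
    double target = 1;
    for (int i = 0; i < a; i++) target *= baseVal;   // base^a
    double lo = 0, hi = Math.Max(baseVal, 1);
    for (int iter = 0; iter < 200; iter++)           // bisect until converged
    {
        double mid = (lo + hi) / 2;
        double power = 1;
        for (int i = 0; i < b; i++) power *= mid;    // mid^b
        if (power > target) hi = mid; else lo = mid;
    }
    return lo;   // PowByChop(2, 439, 500) ≈ 1.8378 ≈ 2^0.878
}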
Most of it comes down to being able to invert the power operation.
In other words, the basic idea is that (for example) N^2 should be basically the "opposite" of N^(1/2), so that if you do something like:
M = N^2
L = M^(1/2)
then the result you get in L should be the same as the original value in N (ignoring any rounding and such).
Mathematically, that means that N^(1/2) is the same as sqrt(N), N^(1/3) is the cube root of N, and so on.
The next step after that would be something like N^(3/2). This is pretty much the same idea: the denominator is a root, and the numerator is a power, so N^(3/2) is the square root of the cube of N (or the cube of the square root of N--works out the same).
With decimals, we're just expressing a fraction in a slightly different form, so something like N^3.14 can be viewed as N^(314/100)--the hundredth root of N raised to the power 314.
As far as how you compute these: there are quite a few different ways, depending heavily on the compromise you prefer between complexity (chip area, if you're implementing it in hardware) and speed. The obvious way is to use a logarithm: A^B = Log^-1(Log(A)*B).
For a more restricted set of inputs, such as just finding the square root of N, you can often do better than that extremely general method though. For example, the binary reducing method is quite fast--implemented in software, it's still about the same speed as Intel's FSQRT instruction.
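For reference, a hedged C# sketch of that kind of method (my code; "binary reducing" here meaning the classic digit-by-digit square root, producing one result bit per step):
static uint ISqrt(uint n)   // floor(sqrt(n)) for 32-bit unsigned input
{
    uint result = 0;
    uint bit = 1u << 30;    // highest power of 4 that fits in 32 bits
    while (bit > n) bit >>= 2;
    while (bit != 0)
    {
        if (n >= result + bit)
        {
            n -= result + bit;
            result = (result >> 1) + bit;
        }
        else
        {
            result >>= 1;
        }
        bit >>= 2;
    }
    return result;          // e.g. ISqrt(17) == 4
}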
As stated in the comments, it's not clear if you want a mathematical description of how fractional powers work, or an algorithm to calculate fractional powers.
I will assume the latter.
For almost all functions (like y = 2^x) there is a means of approximating the function using a thing called the Taylor Series http://en.wikipedia.org/wiki/Taylor_series. This approximates any reasonably behaved function as a polynomial, and polynomials can be calculated using only multiplication, division, addition and subtraction (all of which the CPU can do directly). If you calculate the Taylor series for y = 2^x and plug in x = 3.5 you will get 11.313...
This is almost certainly not how exponentiation is actually done on your computer. There are many algorithms which run faster for different inputs. For example, if you calculate 2^3.5 using the Taylor series, then you would have to look at many terms to calculate it with any accuracy. However, the Taylor series converges much faster for x = 0.5 than for x = 3.5. So one obvious improvement is to calculate 2^3.5 as 2^3 * 2^0.5, as 2^3 is easy to calculate directly. Modern exponentiation algorithms use many, many tricks to speed up processing, but the principle is still much the same: approximate the exponentiation function as some infinite sum, and calculate as many terms as you need to get the required accuracy.
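To make the idea concrete, here is a hedged C# sketch (my code; the constant ln(2) is assumed known, since 2^x = e^(x*ln 2)) that evaluates 2^x through the Taylor series of e^z using only the basic arithmetic operators:
static double TwoToThe(double x)
{
    const double Ln2 = 0.6931471805599453;  // ln(2), hard-coded constant
    double z = x * Ln2;
    double term = 1, sum = 1;
    for (int n = 1; n < 50; n++)            // 50 terms is plenty for x = 3.5
    {
        term *= z / n;                      // term now holds z^n / n!
        sum += term;
    }
    return sum;                             // TwoToThe(3.5) ≈ 11.313708...
}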

Percentile algorithm

I am writing a program that finds percentile. According to eHow:
Start to calculate the percentile of your test score (as an example we’ll stick with your score of 87). The formula to use is L/N(100) = P where L is the number of tests with scores less than 87, N is the total number of test scores (here 150) and P is the percentile. Count up the total number of test scores that are less than 87. We’ll assume the number is 113. This gives us L = 113 and N = 150.
And so, according to the instructions, I wrote:
string[] n = Interaction.InputBox("Enter the data set. The numbers do not have to be sorted.").Split(',');
List<Single> x = new List<Single> { };
foreach (string i in n)
{
    x.Add(Single.Parse(i));
}
x.Sort();
List<double> lowerThan = new List<double> { };
Single score = Single.Parse(Interaction.InputBox("Enter the number."));
uint length = (uint)x.Count;
foreach (Single index in x)
{
    if (index > score)
    {
        lowerThan.Add(index);
    }
}
uint lowerThanCount = (uint)lowerThan.Count();
double percentile = lowerThanCount / length * 100;
MessageBox.Show("" + percentile);
Yet the program always returns 0 as the percentile! What errors have I made?
Your calculation
double percentile = lowerThanCount / length * 100;
is done entirely in integers, since the right-hand side consists of all integers. At least one of the operands should be of a floating-point type, e.g.:
double percentile = (float)lowerThanCount / length * 100;
This is effectively a rounding problem: lowerThanCount and length are both uint and therefore don't support decimal places, so any natural percentage calculation (e.g. 0.2/0.5) would result in 0.
For example, if we assume lowerThanCount = 10 and length = 20, the sum looks like
double result = (10 / 20) * 100
Mathematically (10 / 20) = 0.5, but as 0.5 cannot be represented as an integer the fractional part is truncated, which leaves you with 0, so the final calculation becomes
0 * 100 = 0;
You can fix this by forcing the calculation to work with a floating point type instead e.g.
double percentile = (double)lowerThanCount / length * 100
In terms of readability, it probably makes better sense to go with the cast in the calculation given lowerThanCount & length won't ever naturally be floating point numbers.
Also, your code could be simplified a lot using LINQ
string[] n = Interaction.InputBox("Enter the data set. The numbers do not have to be sorted.")
                        .Split(',');
IList<Single> x = n.Select(s => Single.Parse(s))
                   .OrderBy(v => v)
                   .ToList();
Single score = Single.Parse(Interaction.InputBox("Enter the number."));
IList<Single> lowerThan = x.Where(s => s < score).ToList();
Single percentile = (Single)lowerThan.Count / x.Count;
MessageBox.Show(percentile.ToString("P"));
The problem is in the types that you used for your variables: in this expression
double percentile = lowerThanCount / length * 100;
// ^^^^^^^^^^^^^^^^^^^^^^^
// | | |
// This is integer division; since length > lowerThanCount, its result is zero
the division is done on integers, so the result is going to be zero.
Change the type of lowerThanCount to double to fix this problem:
double lowerThanCount = (double)lowerThan.Count();
You are using integer division instead of floating-point division. Cast length or lowerThanCount to a float before dividing.
Besides the percentile calculation (which should use floats), I think your count is off here:
foreach (Single index in x)
{
    if (index > score)
    {
        lowerThan.Add(index);
    }
}
You go through the values, and if they are larger than score, you put them into lowerThan.
Just a logical mistake?
EDIT: for the percentile problem, here is my fix:
double percentile = ((double)lowerThanCount / (double)length) * 100.0;
You might not need all the (double)'s there, but just to be safe...

Fast Exp calculation: possible to improve accuracy without losing too much performance?

I am trying out the fast Exp(x) function that previously was described in this answer to an SO question on improving calculation speed in C#:
public static double Exp(double x)
{
    var tmp = (long)(1512775 * x + 1072632447);
    return BitConverter.Int64BitsToDouble(tmp << 32);
}
The expression uses some IEEE floating point "tricks" and is primarily intended for use in neural nets. The function is approximately 5 times faster than the regular Math.Exp(x) function.
Unfortunately, the numeric accuracy is only -4% to +2% relative to the regular Math.Exp(x) function; ideally I would like accuracy within at least the sub-percent range.
I have plotted the quotient between the approximate and the regular Exp functions, and as can be seen in the graph the relative difference appears to be repeated with practically constant frequency.
Is it possible to take advantage of this regularity to improve the accuracy of the "fast exp" function further without substantially reducing the calculation speed, or would the computational overhead of an accuracy improvement outweigh the computational gain of the original expression?
(As a side note, I have also tried one of the alternative approaches proposed in the same SO question, but this approach does not seem to be computationally efficient in C#, at least not for the general case.)
UPDATE MAY 14
Upon request from @Adriano, I have now performed a very simple benchmark. I performed 10 million computations using each of the alternative exp functions for floating point values in the range [-100, 100]. Since the range of values I am interested in spans from -20 to 0, I have also explicitly listed the function value at x = -5. Here are the results:
Math.Exp: 62.525 ms, exp(-5) = 0.00673794699908547
Empty function: 13.769 ms
ExpNeural: 14.867 ms, exp(-5) = 0.00675211846828461
ExpSeries8: 15.121 ms, exp(-5) = 0.00641270968867667
ExpSeries16: 32.046 ms, exp(-5) = 0.00673666189488182
exp1: 15.062 ms, exp(-5) = -12.3333325982094
exp2: 15.090 ms, exp(-5) = 13.708332516253
exp3: 16.251 ms, exp(-5) = -12.3333325982094
exp4: 17.924 ms, exp(-5) = 728.368055056781
exp5: 20.972 ms, exp(-5) = -6.13293614238501
exp6: 24.212 ms, exp(-5) = 3.55518353166184
exp7: 29.092 ms, exp(-5) = -1.8271053775984
exp7 +/-: 38.482 ms, exp(-5) = 0.00695945286970704
ExpNeural is equivalent to the Exp function specified at the beginning of this text. ExpSeries8 is the formulation that I originally claimed was not very efficient on .NET; when implemented exactly like Neil's it was actually very fast. ExpSeries16 is the analogous formula but with 16 multiplications instead of 8. exp1 through exp7 are the different functions from Adriano's answer below. The final variant of exp7 is a variant where the sign of x is checked; if negative, the function returns 1/exp(-x) instead.
Unfortunately, none of the expN functions listed by Adriano are sufficient over the broader negative value range I am considering. The series expansion approach by Neil Coffey seems more suitable in "my" value range, although it diverges too rapidly for larger negative x, especially when using "only" 8 multiplications.
Taylor series approximations (such as the expX() functions in Adriano's answer) are most accurate near zero and can have huge errors at -20 or even -5. If the input has a known range, such as -20 to 0 like the original question, you can use a small look up table and one additional multiply to greatly improve accuracy.
The trick is to recognize that exp() can be separated into integer and fractional parts. For example:
exp(-2.345) = exp(-2.0) * exp(-0.345)
The fractional part will always be between -1 and 1, so a Taylor series approximation will be pretty accurate. The integer part has only 21 possible values for exp(-20) to exp(0), so these can be stored in a small look up table.
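For instance, here is a minimal C# sketch of that split (my code and names, not from the answer; the 21-entry table is built once with Math.Exp, and a degree-5 Taylor series handles the fractional part):
static readonly double[] ExpInt = BuildExpTable();   // ExpInt[k] = exp(-k)

static double[] BuildExpTable()
{
    var table = new double[21];
    for (int k = 0; k <= 20; k++) table[k] = Math.Exp(-k);
    return table;
}

static double FastExpNeg(double x)   // assumes -20 <= x <= 0
{
    int i = (int)x;                  // truncates toward zero: -2 for -2.345
    double f = x - i;                // fractional part, in (-1, 0]
    // degree-5 Taylor series for e^f; decent since |f| < 1
    double ef = 1 + f * (1 + f * (1.0/2 + f * (1.0/6 + f * (1.0/24 + f/120))));
    return ExpInt[-i] * ef;          // exp(i) * exp(f) = exp(x)
}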
Try the following alternatives (exp1 is faster, exp7 is more precise).
Code
public static double exp1(double x) {
    return (6+x*(6+x*(3+x)))*0.16666666f;
}
public static double exp2(double x) {
    return (24+x*(24+x*(12+x*(4+x))))*0.041666666f;
}
public static double exp3(double x) {
    return (120+x*(120+x*(60+x*(20+x*(5+x)))))*0.0083333333f;
}
public static double exp4(double x) {
    return (720+x*(720+x*(360+x*(120+x*(30+x*(6+x))))))*0.0013888888f;
}
public static double exp5(double x) {
    return (5040+x*(5040+x*(2520+x*(840+x*(210+x*(42+x*(7+x)))))))*0.00019841269f;
}
public static double exp6(double x) {
    return (40320+x*(40320+x*(20160+x*(6720+x*(1680+x*(336+x*(56+x*(8+x))))))))*2.4801587301e-5;
}
public static double exp7(double x) {
    return (362880+x*(362880+x*(181440+x*(60480+x*(15120+x*(3024+x*(504+x*(72+x*(9+x)))))))))*2.75573192e-6;
}
Precision
Function   Max error in [-1, 1]       Max error in [-3.14, 3.14]
exp1       0.05        1.8%           8.8742    38.40%
exp2       0.01        0.36%          4.8237    20.80%
exp3       0.0016152   0.59%          2.28       9.80%
exp4       0.0002263   0.0083%        0.9488     4.10%
exp5       0.0000279   0.001%         0.3516     1.50%
exp6       0.0000031   0.00011%       0.1172     0.50%
exp7       0.0000003   0.000011%      0.0355     0.15%
Credits
These implementations of exp() were calculated by "scoofy" using Taylor series, starting from a tanh() implementation by "fuzzpilz" (whoever they are; I just had these references in my code).
In case anyone wants to replicate the relative error function shown in the question, here's a way using Matlab (the "fast" exponent is not very fast in Matlab, but it is accurate):
t = 1072632447+[0:ceil(1512775*pi)];
x = (t - 1072632447)/1512775;
ex = exp(x);
t = uint64(t);
import java.lang.Double;
et = arrayfun( @(n) java.lang.Double.longBitsToDouble(bitshift(n,32)), t );
plot(x, et./ex);
Now, the period of the error exactly coincides with when the binary value of tmp overflows from the mantissa into the exponent. Let's break our data into bins by discarding the bits that become the exponent (making it periodic), and keeping only the high eight remaining bits (to make our lookup table a reasonable size):
index = bitshift(bitand(t,uint64(2^20-2^12)),-12) + 1;
Now we calculate the mean required adjustment:
relerrfix = ex./et;
adjust = NaN(1,256);
for i=1:256; adjust(i) = mean(relerrfix(index == i)); end;
et2 = et .* adjust(index);
The relative error is decreased to +/- .0006. Of course, other table sizes are possible as well (for example, a 6-bit table with 64 entries gives +/- .0025), and the error is almost linear in table size. Linear interpolation between table entries would improve the error yet further, but at the expense of performance. Since we've already met the accuracy goal, let's avoid any further performance hits.
At this point it takes some trivial editor skills to take the values computed by Matlab and create a lookup table in C#. For each computation, we add a bitmask, table lookup, and double-precision multiply.
static double FastExp(double x)
{
    var tmp = (long)(1512775 * x + 1072632447);
    int index = (int)(tmp >> 12) & 0xFF;
    return BitConverter.Int64BitsToDouble(tmp << 32) * ExpAdjustment[index];
}
The speedup is very similar to the original code -- for my computer, this is about 30% faster compiled as x86 and about 3x as fast for x64. With mono on ideone, it's a substantial net loss (but so is the original).
Complete source code and testcase: http://ideone.com/UwNgx
using System;
using System.Diagnostics;
namespace fastexponent
{
class Program
{
static double[] ExpAdjustment = new double[256] {
1.040389835,
1.039159306,
1.037945888,
1.036749401,
1.035569671,
1.034406528,
1.033259801,
1.032129324,
1.031014933,
1.029916467,
1.028833767,
1.027766676,
1.02671504,
1.025678708,
1.02465753,
1.023651359,
1.022660049,
1.021683458,
1.020721446,
1.019773873,
1.018840604,
1.017921503,
1.017016438,
1.016125279,
1.015247897,
1.014384165,
1.013533958,
1.012697153,
1.011873629,
1.011063266,
1.010265947,
1.009481555,
1.008709975,
1.007951096,
1.007204805,
1.006470993,
1.005749552,
1.005040376,
1.004343358,
1.003658397,
1.002985389,
1.002324233,
1.001674831,
1.001037085,
1.000410897,
0.999796173,
0.999192819,
0.998600742,
0.998019851,
0.997450055,
0.996891266,
0.996343396,
0.995806358,
0.995280068,
0.99476444,
0.994259393,
0.993764844,
0.993280711,
0.992806917,
0.992343381,
0.991890026,
0.991446776,
0.991013555,
0.990590289,
0.990176903,
0.989773325,
0.989379484,
0.988995309,
0.988620729,
0.988255677,
0.987900083,
0.987553882,
0.987217006,
0.98688939,
0.98657097,
0.986261682,
0.985961463,
0.985670251,
0.985387985,
0.985114604,
0.984850048,
0.984594259,
0.984347178,
0.984108748,
0.983878911,
0.983657613,
0.983444797,
0.983240409,
0.983044394,
0.982856701,
0.982677276,
0.982506066,
0.982343022,
0.982188091,
0.982041225,
0.981902373,
0.981771487,
0.981648519,
0.981533421,
0.981426146,
0.981326648,
0.98123488,
0.981150798,
0.981074356,
0.981005511,
0.980944219,
0.980890437,
0.980844122,
0.980805232,
0.980773726,
0.980749562,
0.9807327,
0.9807231,
0.980720722,
0.980725528,
0.980737478,
0.980756534,
0.98078266,
0.980815817,
0.980855968,
0.980903079,
0.980955475,
0.981017942,
0.981085714,
0.981160303,
0.981241675,
0.981329796,
0.981424634,
0.981526154,
0.981634325,
0.981749114,
0.981870489,
0.981998419,
0.982132873,
0.98227382,
0.982421229,
0.982575072,
0.982735318,
0.982901937,
0.983074902,
0.983254183,
0.983439752,
0.983631582,
0.983829644,
0.984033912,
0.984244358,
0.984460956,
0.984683681,
0.984912505,
0.985147403,
0.985388349,
0.98563532,
0.98588829,
0.986147234,
0.986412128,
0.986682949,
0.986959673,
0.987242277,
0.987530737,
0.987825031,
0.988125136,
0.98843103,
0.988742691,
0.989060098,
0.989383229,
0.989712063,
0.990046579,
0.990386756,
0.990732574,
0.991084012,
0.991441052,
0.991803672,
0.992171854,
0.992545578,
0.992924825,
0.993309578,
0.993699816,
0.994095522,
0.994496677,
0.994903265,
0.995315266,
0.995732665,
0.996155442,
0.996583582,
0.997017068,
0.997455883,
0.99790001,
0.998349434,
0.998804138,
0.999264107,
0.999729325,
1.000199776,
1.000675446,
1.001156319,
1.001642381,
1.002133617,
1.002630011,
1.003131551,
1.003638222,
1.00415001,
1.004666901,
1.005188881,
1.005715938,
1.006248058,
1.006785227,
1.007327434,
1.007874665,
1.008426907,
1.008984149,
1.009546377,
1.010113581,
1.010685747,
1.011262865,
1.011844922,
1.012431907,
1.013023808,
1.013620615,
1.014222317,
1.014828902,
1.01544036,
1.016056681,
1.016677853,
1.017303866,
1.017934711,
1.018570378,
1.019210855,
1.019856135,
1.020506206,
1.02116106,
1.021820687,
1.022485078,
1.023154224,
1.023828116,
1.024506745,
1.025190103,
1.02587818,
1.026570969,
1.027268461,
1.027970647,
1.02867752,
1.029389072,
1.030114973,
1.030826088,
1.03155163,
1.032281819,
1.03301665,
1.033756114,
1.034500204,
1.035248913,
1.036002235,
1.036760162,
1.037522688,
1.038289806,
1.039061509,
1.039837792,
1.040618648
};
static double FastExp(double x)
{
var tmp = (long)(1512775 * x + 1072632447);
int index = (int)(tmp >> 12) & 0xFF;
return BitConverter.Int64BitsToDouble(tmp << 32) * ExpAdjustment[index];
}
static void Main(string[] args)
{
double[] x = new double[1000000];
double[] ex = new double[x.Length];
double[] fx = new double[x.Length];
Random r = new Random();
for (int i = 0; i < x.Length; ++i)
x[i] = r.NextDouble() * 40;
Stopwatch sw = new Stopwatch();
sw.Start();
for (int j = 0; j < x.Length; ++j)
ex[j] = Math.Exp(x[j]);
sw.Stop();
double builtin = sw.Elapsed.TotalMilliseconds;
sw.Reset();
sw.Start();
for (int k = 0; k < x.Length; ++k)
fx[k] = FastExp(x[k]);
sw.Stop();
double custom = sw.Elapsed.TotalMilliseconds;
double min = 1, max = 1;
for (int m = 0; m < x.Length; ++m) {
double ratio = fx[m] / ex[m];
if (min > ratio) min = ratio;
if (max < ratio) max = ratio;
}
Console.WriteLine("minimum ratio = " + min.ToString() + ", maximum ratio = " + max.ToString() + ", speedup = " + (builtin / custom).ToString());
}
}
}
The following code should address the accuracy requirements, as for inputs in [-87,88] the results have relative error <= 1.73e-3. I do not know C#, so this is C code, but I assume conversion should be fairly straightforward.
I assume that since the accuracy requirement is low, the use of single-precision computation is fine. A classic algorithm is being used in which the computation of exp() is mapped to computation of exp2(). After argument conversion via multiplication by log2(e), exponentiation by the fractional part is handled using a minimax polynomial of degree 2, while exponentiation by the integral part of the argument is performed by direct manipulation of the exponent field of the IEEE-754 single-precision number.
The volatile union facilitates re-interpretation of a bit pattern as either an integer or a single-precision floating-point number, needed for the exponent manipulation. It looks like C# offers dedicated re-interpretation functions for this, which is much cleaner.
The two potential performance problems are the floor() function and float->int conversion. Traditionally both were slow on x86 due to the need to handle dynamic processor state. But SSE (in particular SSE 4.1) provides instructions that allow these operations to be fast. I do not know whether C# can make use of those instructions.
/* max. rel. error <= 1.73e-3 on [-87,88] */
float fast_exp (float x)
{
    volatile union {
        float f;
        unsigned int i;
    } cvt;

    /* exp(x) = 2^i * 2^f; i = floor (log2(e) * x), 0 <= f <= 1 */
    float t = x * 1.442695041f;
    float fi = floorf (t);
    float f = t - fi;
    int i = (int)fi;
    cvt.f = (0.3371894346f * f + 0.657636276f) * f + 1.00172476f; /* compute 2^f */
    cvt.i += (i << 23);                                           /* scale by 2^i */
    return cvt.f;
}
I have now studied the paper by Nicol Schraudolph, where the original C implementation of the above function was derived, in more detail. It does seem that it is probably not possible to substantially improve the accuracy of the exp computation without severely impacting the performance. On the other hand, the approximation is valid also for large magnitudes of x, up to +/- 700, which is of course advantageous.
The function implementation above is tuned to obtain minimum root mean square error. Schraudolph describes how the additive term in the tmp expression can be altered to achieve alternative approximation properties.
"exp" >= exp for all x 1072693248 - (-1) = 1072693249
"exp" <= exp for all x - 90253 = 1072602995
"exp" symmetric around exp - 45799 = 1072647449
Mimimum possible mean deviation - 68243 = 1072625005
Minimum possible root-mean-square deviation - 60801 = 1072632447
He also points out that at a "microscopic" level the approximate "exp" function exhibits stair-case behavior since 32 bits are discarded in the conversion from long to double. This means that the function is piece-wise constant on a very small scale, but the function is at least never decreasing with increasing x.
For my purposes I have developed the following function, which quickly and accurately calculates the natural exponent in single precision. The function works over the entire range of float values. The code is written for Visual Studio (x86).
__declspec(naked) float __vectorcall fexp(float x)
{
static const float ct[8] = // Constants table
{
1.44269502f, // lb(e)
1.92596299E-8f, // Correction to the value lb(e)
-9.21120925E-4f, // 16*b2
0.115524396f, // 4*b1
2.88539004f, // b0
4.29496730E9f, // 2^32
0.5f, // 0.5
2.32830644E-10f // 2^-32
};
_asm
{
mov ecx,offset ct // ecx contains the address of constants tables
vmulss xmm1,xmm0,[ecx] // xmm1 = x*lb(e)
vcvtss2si eax,xmm1 // eax = round(x*lb(e)) = k
vcvtsi2ss xmm1,xmm1,eax // xmm1 = k
dec eax // of=1, if eax=80000000h, i.e. overflow
jno exp_cont // Jump to form 2^k, if k is normal
vucomiss xmm0,xmm0 // Compare xmm0 with itself to identify NaN
jp exp_end // Complete with NaN result, if x=NaN
vmovd eax,xmm0 // eax contains the bits of x
test eax,eax // sf=1, if x<0; of=0 always
exp_break: // Get 0 for x<0 or Inf for x>0
vxorps xmm0,xmm0,xmm0 // xmm0 = 0
jle exp_end // Ready, if x<<0
vrcpss xmm0,xmm0,xmm0 // xmm0 = Inf at case x>>0
exp_end: // The result at xmm0 is ready
ret // Return
exp_cont: // Continue if |x| is not too big
vfmsub231ss xmm1,xmm0,[ecx] // xmm1 = x*lb(e)-k = t/2 in the range from -0.5 to 0.5
cdq // edx=-1, if x<0, otherwise edx=0
vfmadd132ss xmm0,xmm1,[ecx+4] // xmm0 = t/2 (corrected value)
and edx,8 // edx=8, if x<0, otherwise edx=0
vmulss xmm2,xmm0,xmm0 // xmm2 = t^2/4 - the argument of polynomial
vmovss xmm1,[ecx+8] // Initialize the sum with highest coefficient 16*b2
lea eax,[eax+8*edx+97] // The exponent of 2^(k-31), if x>0, othervice 2^(k+33)
vfmadd213ss xmm1,xmm2,[ecx+12] // xmm1 = 4*b1+4*b2*t^2
test eax,eax // Test the sign of x
jle exp_break // Result is 0 if the exponent is too small
vfmsub213ss xmm1,xmm2,xmm0 // xmm1 = -t/2+b1*t^2+b2*t^4
cmp eax,254 // Check that the exponent is not too large
ja exp_break // Jump to set Inf if overflow
vaddss xmm1,xmm1,[ecx+16] // xmm1 = b0-t/2+b1*t^2+b2*t^4 = f(t)-t/2
shl eax,23 // eax contains the bits of 2^(k-31) or 2^(k+33)
vdivss xmm0,xmm0,xmm1 // xmm0 = t/(2*f(t)-t)
vmovd xmm2,eax // xmm2 = 2^(k-31), if x>0; otherwise 2^(k+33)
vaddss xmm0,xmm0,[ecx+24] // xmm0 = t/(2*f(t)-t)+0.5
vmulss xmm0,xmm0,xmm2 // xmm0 = e^x with shifted exponent of +-32
vmulss xmm0,xmm0,[ecx+edx+20] // xmm0 = e^x with corrected exponent
ret // Return
}
}

Magnitude to Decibel always returns NaN in C#

So my question has changed from returning Infinity to returning NaN. If your FFT is always returning Infinity, this may help (http://gerrybeauregard.wordpress.com/2011/04/01/an-fft-in-c/#comment-196).
I think it is returning NaN because C# is trying to get the square root of a negative number. HOWEVER, the number should not be negative based on the code below, because I am squaring both numbers before getting the square root (which should make them positive). The number returned is negative, however. I have tried the inefficient way of using multiple variables to get re * re and im * im and adding the two results together, but those results are negative as well. Math.Abs was no good either.
I have contacted the creator of the FFT class (see my link above) and am waiting for his next reply. I took some of the code below from an AS3 version of this I did before. If I get the answer from the class creator before I get one here, I will post it. Any insight is most helpful, and thank you to everyone who has helped me so far. I am an AS3 programmer coming to C# (because it's much more capable), so it's possible I missed something simple in my newbness. I am using Unity.
private const uint LOGN = 11; // Log2 FFT Length
private const uint N = 1 << (int)LOGN; // FFT Length
private const uint BUF_LEN = N; // Audio buffer length
public FFT2 fft; // FFT Object
private double[] tempIm = new double[N]; // Temporary Imaginary Number array
private double[] m_mag = new double[N/2]; // Magnitude array
private double[] m_win = new double[N]; // Hanning Window
private int fftCount = 0; // How many times the FFT has been performed
private double SCALE = (double)20/System.Math.Log(10); // used to convert magnitude from FFT to usable dB
private double MIN_VALUE = (double)System.Double.MinValue;
...
// Hanning analysis window
for (int i = 0; i < N; i++) // for i < 2048
    m_win[i] = (4.0/N) * 0.5*(1-Mathf.Cos(2*Mathf.PI*i/N)); // Hanning Vector [1] = 1 / 4595889085.750801
…
// Perform FFT
fft.run(tempRe, tempIm);
fftCount++;
// Convert from magnitude to decibels
for (int i = 0; i < N/2; i++) {
    double re = tempRe[i]; // get the real FFT value at position i
    double im = tempIm[i]; // get the imaginary FFT value at position i
    m_mag[i] = Math.Sqrt(re * re + im * im); // magnitude
    m_mag[i] = SCALE * Math.Log(m_mag[i] + MIN_VALUE); // convert magnitude to decibels
    if (fftCount == 50 && i == 400) print ("dB # 399: " + m_mag[399]);
}
The prior code prints:
dB # 400: NaN
-5.56725062513722E+33
Thank you!!
You defined MIN_VALUE to be System.Double.MinValue, which is the smallest possible double precision number. Adding anything other than System.Double.MaxValue or positive infinity will give a negative result. Taking the logarithm of a negative number returns NaN.
I'm guessing you want MIN_VALUE to be the smallest positive number. In C#, you use System.Double.Epsilon for that. (I know... it shouldn't be called that...)
Any other tiny value, like 1e-100, will work as well.
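A minimal sketch of the corrected conversion (my method name; the scale factor mirrors the question's SCALE constant):
static double MagnitudeToDb(double re, double im)
{
    double scale = 20.0 / Math.Log(10);
    double mag = Math.Sqrt(re * re + im * im);       // always >= 0
    return scale * Math.Log(mag + double.Epsilon);   // argument now always > 0
}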
Also note this line:
for (int i = 0; i < tempRe.Length; i++) tempRe[i] = m_win[i];
With a multiplication there it will always be zero, unless that is what you wanted.
I found the answer! It turns out that, unlike C++, AS3, and some other languages, C# has a negative minimum double value:
System.Double.MinValue = -1.7976931348623157E+308;
Taking the square root (or logarithm) of a negative number causes the NaN error. For more info see http://www.codeproject.com/KB/cs/numprogrammingcs.aspx. I am having trouble getting an accurate dB reading now, but I'll research and see what I can find.

Average function without overflow exception

.NET Framework 3.5.
I'm trying to calculate the average of some pretty large numbers.
For instance:
using System;
using System.Linq;

class Program
{
    static void Main(string[] args)
    {
        var items = new long[]
        {
            long.MaxValue - 100,
            long.MaxValue - 200,
            long.MaxValue - 300
        };
        try
        {
            var avg = items.Average();
            Console.WriteLine(avg);
        }
        catch (OverflowException ex)
        {
            Console.WriteLine("can't calculate that!");
        }
        Console.ReadLine();
    }
}
Obviously, the mathematical result is 9223372036854775607 (long.MaxValue - 200), but I get an exception there. This is because the implementation (on my machine) to the Average extension method, as inspected by .NET Reflector is:
public static double Average(this IEnumerable<long> source)
{
    if (source == null)
    {
        throw Error.ArgumentNull("source");
    }
    long num = 0L;
    long num2 = 0L;
    foreach (long num3 in source)
    {
        num += num3;
        num2 += 1L;
    }
    if (num2 <= 0L)
    {
        throw Error.NoElements();
    }
    return (((double) num) / ((double) num2));
}
I know I can use a BigInt library (yes, I know that it is included in .NET Framework 4.0, but I'm tied to 3.5).
But I still wonder if there's a straightforward implementation of calculating the average of integers without an external library. Do you happen to know of such an implementation?
Thanks!!
UPDATE:
The previous example, of three large integers, was just an example to illustrate the overflow issue. The question is about calculating an average of any set of numbers which might sum to a large number that exceeds the type's max value. Sorry about this confusion. I also changed the question's title to avoid additional confusion.
Thanks all!!
This answer used to suggest storing the quotient and remainder (mod count) separately. That solution is less space-efficient and more code-complex.
In order to accurately compute the average, you must keep track of the total. There is no way around this, unless you're willing to sacrifice accuracy. You can try to store the total in fancy ways, but ultimately you must be tracking it if the algorithm is correct.
For single-pass algorithms, this is easy to prove. Suppose you can't reconstruct the total of all preceding items, given the algorithm's entire state after processing those items. But wait, we can simulate the algorithm then receiving a series of 0 items until we finish off the sequence. Then we can multiply the result by the count and get the total. Contradiction. Therefore a single-pass algorithm must be tracking the total in some sense.
Therefore the simplest correct algorithm will just sum up the items and divide by the count. All you have to do is pick an integer type with enough space to store the total. Using a BigInteger guarantees no issues, so I suggest using that.
BigInteger total = BigInteger.Zero;
long count = 0;
foreach (long i in values)
{
    count += 1;
    total += i;
}
// warning: possible loss of accuracy; maybe return a Rational instead?
return (double)total / count;
If you're just looking for an arithmetic mean, you can perform the calculation like this:
public static double Mean(this IEnumerable<long> source)
{
    if (source == null)
    {
        throw Error.ArgumentNull("source");
    }
    double count = (double)source.Count();
    double mean = 0D;
    foreach (long x in source)
    {
        mean += (double)x / count;
    }
    return mean;
}
Edit:
In response to comments, there definitely is a loss of precision this way, due to performing numerous divisions and additions. For the values indicated by the question, this should not be a problem, but it should be a consideration.
You may try the following approach:
Let the number of elements be N, and the numbers arr[0], ..., arr[N-1].
You need two variables, mean and remainder, both initially 0.
At step i, change mean and remainder as follows:
mean += arr[i] / N;
remainder += arr[i] % N;
mean += remainder / N;
remainder %= N;
After N steps you will get the correct answer in the mean variable, and remainder / N will be the fractional part of the answer (I am not sure you need it, but anyway). A sketch of this in C# follows below.
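A minimal C# sketch of that scheme (my code; the truncated mean accumulates in mean, and remainder / N supplies the fractional part at the end):
static long MeanWithRemainder(long[] arr)
{
    long n = arr.Length;
    long mean = 0, remainder = 0;
    foreach (long value in arr)
    {
        mean += value / n;           // whole part of value / n
        remainder += value % n;      // carry the leftover
        mean += remainder / n;       // fold carries back in as they accumulate
        remainder %= n;
    }
    return mean;                     // remainder / (double)n is the fraction
}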
If you know approximately what the average will be (or, at least, that all pairs of numbers will have a max difference < long.MaxValue), you can calculate the average difference from that value instead. I take an example with low numbers, but it works equally well with large ones.
// Let's say numbers cannot exceed 40.
List<int> numbers = new List<int>() { 31, 28, 24, 32, 36, 29 }; // Average: 30
List<int> diffs = new List<int>();
// This can probably be done more effectively in linq, but to show the idea:
foreach(int number in numbers.Skip(1))
{
    diffs.Add(number - numbers.First());
}
// diffs now contains { -3, -7, 1, 5, -2 }
var avgDiff = diffs.Sum() / diffs.Count(); // the average diff is -1
// To get the average value, just add the average diff to the first value:
var totalAverage = numbers.First() + avgDiff;
You can of course implement this in some way that makes it easier to reuse, for example as an extension method to IEnumerable<long>.
Here is how I would do it if given this problem. First, let's define a very simple RationalNumber class, which contains two properties, Dividend and Divisor, and an operator for adding two rational numbers. Here is how it looks:
public sealed class RationalNumber
{
    public RationalNumber()
    {
        this.Divisor = 1;
    }

    public static RationalNumber operator +( RationalNumber c1, RationalNumber c2 )
    {
        RationalNumber result = new RationalNumber();
        Int64 nDividend = ( c1.Dividend * c2.Divisor ) + ( c2.Dividend * c1.Divisor );
        Int64 nDivisor = c1.Divisor * c2.Divisor;
        Int64 nRemainder = nDividend % nDivisor;
        if ( nRemainder == 0 )
        {
            // The number is whole
            result.Dividend = nDividend / nDivisor;
        }
        else
        {
            Int64 nGreatestCommonDivisor = FindGreatestCommonDivisor( nDividend, nDivisor );
            if ( nGreatestCommonDivisor != 0 )
            {
                nDividend = nDividend / nGreatestCommonDivisor;
                nDivisor = nDivisor / nGreatestCommonDivisor;
            }
            result.Dividend = nDividend;
            result.Divisor = nDivisor;
        }
        return result;
    }

    private static Int64 FindGreatestCommonDivisor( Int64 a, Int64 b )
    {
        Int64 nRemainder;
        while ( b != 0 )
        {
            nRemainder = a % b;
            a = b;
            b = nRemainder;
        }
        return a;
    }

    // a / b: a is the dividend, b is the divisor
    public Int64 Dividend { get; set; }
    public Int64 Divisor { get; set; }
}
The second part is really easy. Let's say we have an array of numbers. Their average is estimated by Sum(Numbers)/Length(Numbers), which is the same as Number[ 0 ] / Length + Number[ 1 ] / Length + ... + Number[ n ] / Length. To be able to calculate this, we will represent each Number[ i ] / Length as a whole number plus a rational part (remainder). Here is how it looks:
Int64[] aValues = new Int64[] { long.MaxValue - 100, long.MaxValue - 200, long.MaxValue - 300 };
List<RationalNumber> list = new List<RationalNumber>();
Int64 nAverage = 0;
for ( Int32 i = 0; i < aValues.Length; ++i )
{
    Int64 nRemainder = aValues[ i ] % aValues.Length;
    Int64 nWhole = aValues[ i ] / aValues.Length;
    nAverage += nWhole;
    if ( nRemainder != 0 )
    {
        list.Add( new RationalNumber() { Dividend = nRemainder, Divisor = aValues.Length } );
    }
}
RationalNumber rationalTotal = new RationalNumber();
foreach ( var rational in list )
{
    rationalTotal += rational;
}
nAverage = nAverage + ( rationalTotal.Dividend / rationalTotal.Divisor );
At the end we have a list of rational numbers and a whole number, which we sum together to get the average of the sequence without an overflow. The same approach can be taken for any type without overflow, and there is no loss of precision.
EDIT:
Why this works:
Define A: a set of numbers, with Average( A ) = SUM( A ) / LEN( A ) =>
Average( A ) = A[ 0 ] / LEN( A ) + A[ 1 ] / LEN( A ) + A[ 2 ] / LEN( A ) + ... + A[ N ] / LEN( A ) =>
Define An to be a number that satisfies An = X + ( Y / LEN( A ) ), which holds because dividing A[ n ] by LEN( A ) gives a whole part X with a remainder, the rational number ( Y / LEN( A ) ).
=> so
Average( A ) = A1 + A2 + A3 + ... + AN = X1 + X2 + X3 + X4 + ... + Remainder1 + Remainder2 + ...
Sum the whole parts, and sum the remainders by keeping them in rational number form. In the end we get one whole number and one rational number, which summed together give Average( A ). Depending on what precision you'd like, you apply this only to the rational number at the end.
Simple answer with LINQ...
var data = new[] { int.MaxValue, int.MaxValue, int.MaxValue };
var mean = (int)data.Select(d => (double)d / data.Count()).Sum();
Depending on the size of the data set, you may want to force data with .ToList() or .ToArray() before you process this method so it can't requery Count on each pass. (Or you can call it before the .Select(..).Sum().)
If you know in advance that all your numbers are going to be 'big' (in the sense of 'much nearer long.MaxValue than zero), you can calculate the average of their distance from long.MaxValue, then the average of the numbers is long.MaxValue less that.
However, this approach will fail if (m)any of the numbers are far from long.MaxValue, so it's horses for courses...
I guess there has to be a compromise somewhere or other. If the numbers are really getting so large, then the lower-order digits (say the lower 5 digits) might not affect the result much.
Another issue is where you don't really know the size of the dataset coming in, especially in stream/real-time cases. Here I don't see any solution other than
(previousAverage * oldCount + newValue) / (oldCount + 1)
Here's a suggestion:
*LargestDataTypePossible* currentAverage;
*SomeSuitableDatatypeSupportingRationalValues* newValue;
*int* count;

addToCurrentAverage(value){
    newValue = value/100000;
    count = count + 1;
    currentAverage = (currentAverage * (count-1) + newValue) / count;
}

getCurrentAverage(){
    return currentAverage * 100000;
}
Averaging numbers of a specific numeric type in a safe way while also only using that numeric type is actually possible, although I would advise using the help of BigInteger in a practical implementation. I created a project for Safe Numeric Calculations that has a small structure (Int32WithBoundedRollover) which can sum up to 2^32 int32s without any overflow (the structure internally uses two int32 fields to do this, so no larger data types are used).
Once you have this sum you then need to calculate sum/total to get the average, which you can do (although I wouldn't recommend it) by creating and then incrementing by total another instance of Int32WithBoundedRollover. After each increment you can compare it to the sum until you find out the integer part of the average. From there you can peel off the remainder and calculate the fractional part. There are likely some clever tricks to make this more efficient, but this basic strategy would certainly work without needing to resort to a bigger data type.
That being said, the current implementation isn't built for this (for instance, there is no comparison operator on Int32WithBoundedRollover, although it wouldn't be too hard to add). The reason is that it is just much simpler to use BigInteger at the end to do the calculation. Performance-wise this doesn't matter too much for large averages, since it will only be done once, and it is just too clean and easy to understand to worry about coming up with something clever (at least so far...).
As far as your original question which was concerned with the long data type, the Int32WithBoundedRollover could be converted to a LongWithBoundedRollover by just swapping int32 references for long references and it should work just the same. For Int32s I did notice a pretty big difference in performance (in case that is of interest). Compared to the BigInteger only method the method that I produced is around 80% faster for the large (as in total number of data points) samples that I was testing (the code for this is included in the unit tests for the Int32WithBoundedRollover class). This is likely mostly due to the difference between the int32 operations being done in hardware instead of software as the BigInteger operations are.
How about BigInteger in Visual J#.
If you're willing to sacrifice precision, you could do something like:
long num2 = 0L;
foreach (long num3 in source)
{
    num2 += 1L;
}
if (num2 <= 0L)
{
    throw Error.NoElements();
}
double average = 0;
foreach (long num3 in source)
{
    average += (double)num3 / (double)num2;
}
return average;
Perhaps you can reduce every item by calculating the average of adjusted values and then multiplying it by the number of elements in the collection. However, you'll get a slightly different number because of the floating point operations involved.
var items = new long[] { long.MaxValue - 100, long.MaxValue - 200, long.MaxValue - 300 };
var avg = items.Average(i => i / items.Count()) * items.Count();
You could keep a rolling average which you update once for each large number.
Use the IntX library on CodePlex.
NextAverage = CurrentAverage + (NewValue - CurrentAverage) / (CurrentObservations + 1)
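As a minimal C# sketch (my code) of that update rule; each step stays within double range, so the full sum is never materialized:
static double RollingAverage(IEnumerable<long> source)
{
    double average = 0;
    long observations = 0;
    foreach (long value in source)
    {
        observations++;
        average += (value - average) / observations;
    }
    return average;
}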
Here is my version of an extension method that can help with this.
public static long Average(this IEnumerable<long> longs)
{
    long mean = 0;
    long count = longs.Count();
    foreach (var val in longs)
    {
        mean += val / count;
    }
    return mean;
}
Let Avg(n) be the average of the first n numbers, where data[n] is the nth number.
Avg(n) = (double)(n-1)/(double)n * Avg(n-1) + (double)data[n]/(double)n
This avoids value overflow, but loses precision when n is very large.
For two positive numbers (or two negative numbers), I found a very elegant solution from here, where the average computation (a + b) / 2 can be replaced with a + (b - a) / 2.
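As a one-line C# sketch (mine; it assumes a <= b, so b - a cannot overflow):
static long Midpoint(long a, long b) => a + (b - a) / 2;  // never computes a + b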
