I have a numerical computation method in my .NET code that will be called more than 1000 times.
private double CalculatePressureLossThroughPipe(double length, double flow, double diameter)
{
    double velocity = flow / CalculatePipeArea(diameter);

    // calculate Reynolds number
    double reNo = (this.mDensity * velocity * diameter) / this.mViscosity;

    // calculate friction factor (Churchill correlation)
    double costA = Math.Pow(2.457 * Math.Log(1 / (Math.Pow(7 / reNo, 0.9) + (0.27 * 0.000015) / diameter)), 16);
    double costB = Math.Pow(37530 / reNo, 16);
    double frictionFactor = 2 * Math.Pow(Math.Pow(8 / reNo, 12) + 1 / Math.Pow(costA + costB, 1.5), 0.083333);

    // calculate pressure loss
    double pressure = DesignConstants.PRESSSURE_CONSTANT * 2 * frictionFactor * length * Math.Pow(velocity, 2) * this.mDensity / diameter;
    return pressure;
}
This function will be called in a loop, with a different set of input parameters each time. The loop itself is quite intensive, and it calls the above-mentioned function (with unique parameters) on every iteration. The function, although it looks small, is quite resource intensive. Is there an alternate way to process the method calls without using the standard members from System.Math?
It looks like the constant factor (0.27 * 0.000015) can be precalculated, since it's the only part of the friction-factor expression that doesn't depend on your inputs. In any case, when you say this method 'is quite resource intensive', presumably you mean it takes a long time. Have you benchmarked it? What would an acceptable time be? These are the things you need to find out before trying to optimise anything.
You could try to improve the performance using multiple threads (using Tasks / Threads) and vectorization.
Using System.Numerics you may be able to leverage the power of SIMD, possibly increasing performance 4 times.
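For illustration, here is a minimal sketch of the idea with System.Numerics. It's an assumption-laden example: Vector&lt;T&gt; has no Pow or Log, so only the multiply/add parts of a formula vectorize this way, and the array names are hypothetical (the per-call inputs would need to be batched up first):

using System.Numerics;

static void SquareVelocities(double[] velocity, double[] result)
{
    int i = 0;
    int width = Vector<double>.Count;   // e.g. 4 doubles per vector with AVX2
    for (; i <= velocity.Length - width; i += width)
    {
        var v = new Vector<double>(velocity, i);
        (v * v).CopyTo(result, i);      // squares 'width' lanes at once
    }
    for (; i < velocity.Length; i++)    // scalar remainder
        result[i] = velocity[i] * velocity[i];
}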
First of all, you should analyze all the mathematical expressions and precalculate the ones that don't depend on your inputs, e.g.:
(0.27*0.000015)
Also try to use multiplication instead of Math.Pow where possible: velocity * velocity would be faster than Math.Pow(velocity, 2).
If possible, you can try Pow approximation algorithms; they are faster but less precise. See this article for more information: http://martin.ankerl.com/2007/10/04/optimized-pow-approximation-for-java-and-c-c/
Are you using the Parallel class for your loop, to utilize the multiple cores/processors of your PC? https://msdn.microsoft.com/library/dd537608(v=vs.110).aspx
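A minimal sketch of what that could look like, assuming the per-call inputs are batched into arrays (the array names here are hypothetical) and that CalculatePressureLossThroughPipe touches no shared mutable state:

using System.Threading.Tasks;

double[] results = new double[lengths.Length];
Parallel.For(0, lengths.Length, i =>
{
    results[i] = CalculatePressureLossThroughPipe(lengths[i], flows[i], diameters[i]);
});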
I am currently in the process of turning a rather lofty Excel sheet that is used for calculating scientific values into a C# application. However, I am hitting some problems with regard to rounding.
All of my values are stored as doubles, and when you perform a small number of operations on them they match the Excel sheet within acceptable accuracy (5 or 6 decimal places). When they are put through rather large operations involving division, multiplication, and square roots, they start to drift off by quite a large margin. I switched the entire code base to decimals at one point to test whether that resolved the issue; it lessened the gap, but the issue remained.
I am aware this is due to the nature of floating-point numbers in software development, but it's imperative that I match Excel's rounding as closely as possible. Research on this topic points me towards the standards that Excel uses to round, and it seems C# by default uses a slightly different one. Despite learning of this, I am still unsure how to proceed in replicating Excel's rounding. I'm wondering if anyone has any advice or previous experience on this topic?
Any help would be greatly appreciated.
EDIT: I would just like to clarify that I am not rounding my numbers at all; the rounding in both the sheet and my code is being applied implicitly. I have tested the same formulas inside a totally different software package (a form builder called K2). The resulting numbers match my C# application, so it seems Excel's implicit rounding differs in some way.
One of the offending formulas:
(8.04 * Math.Pow(10, -5)) *
(Math.Pow(preTestTestingDetails.PitotCp, 2)) * (DeltaH) *
(tempDGMAverage + 273.0) /
(StackTemp + 273) *
((preTestTestingDetails.BarometricPressure / 0.133322 +
((preTestTestingDetails.StackStaticPressure / 9.80665) / 13.6)) /
(preTestTestingDetails.BarometricPressure / 0.133322)) *
(preTestTestingDetails.EstimatedMolWeight /
((preTestTestingDetails.EstimatedMolWeight * (1 - (EstimatedMoisture / 100))) +
(18 * (EstimatedMoisture / 100)))) *
Math.Pow((1 - (EstimatedMoisture / 100)), 2) *
(Math.Pow(preTestTestingDetails.NozzleMean, 4));
In C# the result of
int x = 5;
var result = x / 2; // result is 2 and of type int
... because an integer division is performed. So if integers are involved (not a double with no decimals, but a value of type int or long), make sure to convert to double before dividing.
int x = 5;
double result = x / 2; // result is 2.0 because conversion to double is made after division
This works:
int x = 5;
var result = (double)x / 2; // result is 2.5 and of type double
int x = 5;
var result = x / 2.0; // result is 2.5 and of type double
int x = 5;
var result = 0.5 * x; // result is 2.5 and of type double
The only place in your formula where this could happen is EstimatedMoisture / 100, in case EstimatedMoisture is of type int. If this is the case, fix it with EstimatedMoisture / 100.0.
Instead of 8.04 * Math.Pow(10, -5), you can write 8.04e-5. This avoids rounding effects of Math.Pow!
I don't know how Math.Pow(a, b) works, but the general formula is a^b = exp(b * ln(a)). So instead of writing Math.Pow(something, 2), write something * something. This is both faster and more accurate.
Using constants for magic numbers adds clarity. Using temps for common sub-expressions makes the formula more readable.
const double mmHg_to_kPa = 0.133322;
const double g0 = 9.80665;
var p = preTestTestingDetails;
double moisture = EstimatedMoisture / 100.0;
double dryness = 1.0 - moisture;
double pressure_mmHg = p.BarometricPressure / mmHg_to_kPa;
double nozzleMean2 = p.NozzleMean * p.NozzleMean;
double nozzleMean4 = nozzleMean2 * nozzleMean2;
double result = 8.04E-05 *
p.PitotCp * p.PitotCp * DeltaH * (tempDGMAverage + 273.0) / (StackTemp + 273.0) *
((pressure_mmHg + p.StackStaticPressure / g0 / 13.6) / pressure_mmHg) *
(p.EstimatedMolWeight / (p.EstimatedMolWeight * dryness + 18.0 * moisture)) *
dryness * dryness * nozzleMean4;
Why not use 273.15 instead of 273.0 if precision is a concern?
I'm trying to get the pitch from the microphone input. First I decomposed the signal from the time domain to the frequency domain with an FFT, applying a Hamming window to the signal beforehand. Then I passed the complex FFT results to a harmonic product spectrum, where the results get downsampled and the downsampled peaks are multiplied, giving a value as a complex number. What should I do next to get the fundamental frequency?
public float[] HarmonicProductSpectrum(Complex[] data)
{
    Complex[] hps2 = Downsample(data, 2);
    Complex[] hps3 = Downsample(data, 3);
    Complex[] hps4 = Downsample(data, 4);
    Complex[] hps5 = Downsample(data, 5);
    float[] array = new float[hps5.Length];
    for (int i = 0; i < array.Length; i++)
    {
        checked
        {
            array[i] = data[i].X * hps2[i].X * hps3[i].X * hps4[i].X * hps5[i].X;
        }
    }
    return array;
}

public Complex[] Downsample(Complex[] data, int n)
{
    Complex[] array = new Complex[Convert.ToInt32(Math.Ceiling(data.Length * 1.0 / n))];
    for (int i = 0; i < array.Length; i++)
    {
        array[i].X = data[i * n].X;
    }
    return array;
}
I have tried to get the magnitude using,
magnitude[i] = (float)Math.Sqrt(array[i] * array[i] + (data[i].Y * data[i].Y));
inside the for loop in the HarmonicProductSpectrum method. Then I tried to get the maximum bin using:
float max_mag = float.MinValue;
float max_index = -1;
for (int i = 0; i < array.Length / 2; i++)
{
    if (magnitude[i] > max_mag)
    {
        max_mag = magnitude[i];
        max_index = i;
    }
}
and then I tried to get the frequency using,
var frequency = max_index * 44100 / 1024;
But I was getting garbage values like 1248.926, 1205.859, and 2454.785 for the A4 note (440 Hz), and those values don't look like harmonics of A4.
Any help would be greatly appreciated.
I implemented harmonic product spectrum in Python to make sure your data and algorithm were working nicely.
Here’s what I see when applying harmonic product spectrum to the full dataset, Hamming-windowed, with 5 downsample–multiply stages:
This is just the bottom kilohertz, but the spectrum is pretty much dead above 1 kHz.
If I chunk up the long audio clip into 8192-sample chunks (with 4096-sample 50% overlap) and Hamming-window each chunk and run HPS on it, this is the matrix of HPS. This is kind of a movie of the HPS spectrum over the entire dataset. The fundamental frequency seems to be quite stable.
The full source code is here; there's a lot of code that helps chunk the data and visualize the output of HPS running on the chunks, but the core HPS function, starting at def hps(…, is short. It does have a couple of tricks in it, though.
Given the strange frequencies you're finding the peak at, it could be that you're operating on the full spectrum, from 0 to 44.1 kHz? You want to keep only the "positive" frequencies, i.e., from 0 to 22.05 kHz, and apply the HPS algorithm (downsample and multiply) to that.
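For instance, if data holds the full N-point FFT output of a real signal, a sketch of keeping only the non-negative-frequency bins (DC up to Nyquist) before running HPS:

// keep bins 0 .. N/2 of an N-point FFT of a real signal
Complex[] positive = new Complex[data.Length / 2 + 1];
Array.Copy(data, positive, positive.Length);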
But assuming you start out with a positive-frequency-only spectrum and take its magnitude properly, it looks like you should get reasonable results. Try saving out the output of your HarmonicProductSpectrum to see if it's anything like the above.
Again, the full source code is at https://gist.github.com/fasiha/957035272009eb1c9eb370936a6af2eb. (There I try out a couple of other spectral estimators, Welch's method from SciPy and my port of the Blackman-Tukey spectral estimator. I'm not sure whether you are set on implementing HPS or would consider other pitch estimators, so I'm leaving the Welch/Blackman-Tukey results there.)
Original answer: I wrote this as a comment but had to keep revising it because it was confusing, so here it is as a mini-answer.
Based on my brief reading of this intro to HPS, I don’t think you’re taking the magnitudes correctly after you find the four decimated responses.
You want:
array[i] = sqrt(data[i] * Complex.conjugate(data[i]) *
                hps2[i] * Complex.conjugate(hps2[i]) *
                hps3[i] * Complex.conjugate(hps3[i]) *
                hps4[i] * Complex.conjugate(hps4[i]) *
                hps5[i] * Complex.conjugate(hps5[i])).X;
This uses the sqrt(x * Complex.conjugate(x)) trick to find x’s magnitude, and then multiplies all 5 magnitudes.
(Actually, it moves the sqrt outside the product, so you only do one sqrt, saves some time, but gives the same result. So maybe that’s another trick.)
Final trick: it takes that result’s real part because sometimes due to float accuracy issues, a tiny imaginary component, like 1e-15, survives.
After you do this, array should contain just real floats, and you can apply the max-bin-finding.
If there’s no Conjugate method, then the old-fashioned way should work:
public float mag2(Complex c) { return c.X * c.X + c.Y * c.Y; }

// in HarmonicProductSpectrum:
array[i] = (float)Math.Sqrt(mag2(data[i]) * mag2(hps2[i]) * mag2(hps3[i]) * mag2(hps4[i]) * mag2(hps5[i]));
There’s algebraic flaws with the two approaches you suggested in the comments below, but the above should be correct. I’m not sure what C# does when you assign a Complex to a float—maybe it uses the real component? I’d have thought that’d be a compiler error, but with the above code, you’re doing the right thing with the complex data, and only assigning a float to array[i].
To get a pitch estimate, you have to divide your summed bin frequency estimate by the downsampling ratio used for that sum.
Added: You should also sum the magnitudes (abs()), not take the magnitude of the complex sum.
But the harmonic product spectrum (HPS) algorithm, especially when using only integer downsampling ratios, doesn't usually provide better pitch estimation resolution. Instead, it provides a more robust rough pitch estimate (less likely to be fooled by a harmonic) than a single bare FFT magnitude peak, for overtone-rich timbres that have weak or missing fundamental spectral content.
If you know how to downsample a spectrum by fractional ratios (using interpolation, etc.), you can try finer-grained downsampling to get a better pitch estimate out of HPS, as sketched below. Or you can use an HPS result to narrow down the frequency range in which to search with another pitch or frequency estimation method.
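For instance, a minimal sketch of fractional-ratio downsampling of a magnitude spectrum via linear interpolation (a hypothetical helper, shown only to illustrate the idea):

static float[] DownsampleFractional(float[] spectrum, double ratio)   // ratio > 1 compresses
{
    int n = (int)(spectrum.Length / ratio);
    float[] result = new float[n];
    for (int i = 0; i < n; i++)
    {
        double pos = i * ratio;                 // fractional source index
        int lo = (int)pos;
        int hi = Math.Min(lo + 1, spectrum.Length - 1);
        double frac = pos - lo;
        result[i] = (float)((1 - frac) * spectrum[lo] + frac * spectrum[hi]);
    }
    return result;
}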
I have interpreted the formula on Wikipedia in C# code. I do get a nice normal curve, but is it rational to get values that exceed 1? Isn't it supposed to be a distribution function?
This is the C# implementation:
double up = Math.Exp(-Math.Pow(x , 2) / ( 2 * s * s ));
double down = ( s * Math.Sqrt(2 * Math.PI) );
return up / down;
I double-checked it several times and it seems fine to me, so what's wrong: my implementation or my understanding?
For example, if we define x = 0 and s = 0.1, this implementation returns 3.989...
A distribution function, a PDF, has the property that its values are >= 0 and that its integral over -inf to +inf is 1. But the integrand, that is, the PDF itself, can take any value >= 0, including values greater than 1.
In other words, there is no reason, a priori, to believe that a pdf value > 1 indicates a problem.
You can think about this for the normal curve by considering what reducing the variance means. Smaller variance values concentrate the probability mass in the centre. Given that the total mass is always one, as the mass concentrates in the centre, the peak value must increase. You can see that trend in the graph that you link to.
What you should do is compare the output of your code with known good implementations. For instance, Wolfram Alpha gives the same value as you quote: http://www.wolframalpha.com/input/?i=normal+distribution+pdf+mean%3D0+standard+deviation%3D0.1+x%3D0&x=6&y=7
Do a little more testing of this nature, captured in a unit test, and you will be able to rely on your code with confidence.
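For instance, a minimal NUnit-style check of the value discussed above (a sketch; it inlines the code from the question rather than assuming a particular method name):

[Test]
public void NormalPdf_MatchesWolframAlpha_AtPeak()
{
    double x = 0.0, s = 0.1;
    double up = Math.Exp(-Math.Pow(x, 2) / (2 * s * s));
    double down = s * Math.Sqrt(2 * Math.PI);
    // Wolfram Alpha: the pdf of N(0, 0.1) at x = 0 is approximately 3.9894228
    Assert.AreEqual(3.9894228, up / down, 1e-6);
}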
Wouldn't you want something more like this?
public static double NormalDistribution(double value)
{
return (1 / Math.Sqrt(2 * Math.PI)) * Math.Exp(-Math.Pow(value, 2) / 2);
}
Yes, it's totally OK; the distribution itself (the PDF) can take any value from 0 to +infinity. The thing that should be in the range [0..1] is the corresponding integral (e.g. the CDF).
You can convince yourself of this by looking at the case of a non-random value: if the value is not random at all and can take only one constant value, the distribution degenerates (the standard deviation is zero, the mean is the value) into the Dirac delta function: a peak of infinite height but zero width, whose integral (CDF) from -infinity to +infinity is nevertheless 1.
// If you have special functions implemented (i.e. Erf)
// outcome is in [0..inf) range
public static Double NormalPDF(Double value, Double mean, Double sigma) {
    Double v = (value - mean) / sigma;
    return Math.Exp(-v * v / 2.0) / (sigma * Math.Sqrt(Math.PI * 2));
}

// outcome is in [0..1] range
public static Double NormalCDF(Double value, Double mean, Double sigma, Boolean isTwoTail) {
    //TODO: You should have Erf implemented
    Double z = (value - mean) / (Math.Sqrt(2) * sigma);
    if (isTwoTail)
        return Erf(Math.Abs(z));   // central probability P(|X - mean| <= |value - mean|)
    return 0.5 + Erf(z) / 2.0;     // one-tailed P(X <= value)
}
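If you don't have Erf available, one common way to fill in that TODO is the Abramowitz and Stegun 7.1.26 polynomial approximation (maximum absolute error around 1.5e-7). A sketch:

public static Double Erf(Double x) {
    Double sign = x < 0 ? -1.0 : 1.0;
    x = Math.Abs(x);
    // Abramowitz & Stegun formula 7.1.26, in Horner form
    Double t = 1.0 / (1.0 + 0.3275911 * x);
    Double y = 1.0 - (((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
                        - 0.284496736) * t + 0.254829592) * t) * Math.Exp(-x * x);
    return sign * y;
}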
I am making a program that estimates pi by increasing the number of sides of a polygon with a given radius to an extremely high number, taking the area, and dividing it by the radius squared. I have the following:
double radius = 5;
for (double sides = 3; sides < 10000; sides++)
{
    double pi_est = ((radius * radius * sides * Math.Sin((360 / sides) * (Math.PI / 180))) / 2) / (radius * radius);
    richTextBox1.AppendText(pi_est + "\n");
}
As of now, this takes about 5 seconds to complete. Is there anything I could rewrite to improve the efficiency of my loop?
That AppendText call is going to cost a lot of time because it implies accessing the UI. Accumulate the string using a StringBuilder or an array with String.Join instead.
You should never use a double as an iteration variable; use an int instead (that's less of an efficiency problem and more of a potential gotcha).
radius*radius cancels out -- note that pi is independent of the radius used, so you can just assume the radius is equal to 1 and ignore it.
All written out:
StringBuilder sb = new StringBuilder();
for (int sides = 3; sides < 10000; sides++)
{
    double pi_est = sides * Math.Sin((2 * Math.PI) / sides) / 2;
    sb.Append(pi_est + "\n");
}
richTextBox1.AppendText(sb.ToString());
For starters you could precalculate radius * radius outside the loop.
Also, if the rich text box isn't needed inside the loop, update it once after the loop and just use a StringBuilder inside.
In my code I have to do a lot of distance calculations between pairs of lat/long values.
The code looks like this:
double result = Math.Acos(Math.Sin(lat2rad) * Math.Sin(lat1rad)
+ Math.Cos(lat2rad) * Math.Cos(lat1rad) * Math.Cos(lon2rad - lon1rad));
(lat2rad, for example, is a latitude converted to radians).
I have identified this function as the performance bottleneck of my application. Is there any way to improve this?
(I cannot use look-up tables since the coordinates are varying). I have also looked at this question where a lookup scheme like a grid is suggested, which might be a possibility.
Thanks for your time! ;-)
If your goal is to rank (compare) distances, then approximations (sin and cos table lookups) could drastically reduce the amount of computation required (implement a quick reject).
Your goal is then to proceed with the actual trigonometric computation only if the difference between the approximated distances (to be ranked or compared) falls below a certain threshold.
E.g. using lookup tables with 1000 samples (i.e. sin and cos sampled every 2*pi/1000), the lookup uncertainty is at most 0.006284. Propagating that uncertainty through the parameter to Acos, the cumulative uncertainty, which is also the threshold uncertainty, will be at most 0.018731.
So, if evaluating Math.Sin(lat2rad) * Math.Sin(lat1rad) + Math.Cos(lat2rad) * Math.Cos(lat1rad) * Math.Cos(lon2rad - lon1rad) using the sin and cos lookup tables for two coordinate-set pairs (distances) yields a definite ranking (one distance appears greater than the other based on the approximation), and the modulus of the difference exceeds the threshold above, then the approximation is valid. Otherwise, proceed with the actual trigonometric calculation.
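A minimal sketch of such tables (1000 samples, as in the uncertainty estimate above; nearest-sample lookup with no interpolation):

const int N = 1000;
static readonly double[] SinTable = new double[N];
static readonly double[] CosTable = new double[N];

static void BuildTables()
{
    for (int i = 0; i < N; i++)
    {
        SinTable[i] = Math.Sin(2 * Math.PI * i / N);
        CosTable[i] = Math.Cos(2 * Math.PI * i / N);
    }
}

static double FastSin(double x)
{
    double t = x / (2 * Math.PI);             // map x into [0, 2*pi)
    int i = (int)((t - Math.Floor(t)) * N);   // nearest table index below x
    return SinTable[i];
}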
Would the CORDIC algorithm work for you (in regards to speed/accuracy)?
Taking inspiration from @Brann, I think you can reduce the calculation a bit (warning: it's a long time since I did any of this, and it will need verification). Some sort of lookup of precalculated values is probably the fastest, though.
You have (writing A and B for the two latitudes and C for the longitude difference):
1: ACOS( SIN A SIN B + COS A COS B COS C )
But 2: COS(A-B) = SIN A SIN B + COS A COS B
which can be rewritten as 3: SIN A SIN B = COS(A-B) - COS A COS B
Replace SIN A SIN B in 1 and you have:
4: ACOS( COS(A-B) - COS A COS B + COS A COS B COS C )
You pre-calculate X = COS(A-B) and Y = COS A COS B and put the values into 4
to give:
ACOS( X - Y + Y COS C )
Four cosine evaluations instead of two sines and three cosines!
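In code, the reduced version might look like this (a sketch; the variable names follow the original question and the angles are assumed to be in radians already). When one endpoint is fixed across many comparisons, Math.Cos(lat1rad) can additionally be hoisted out of the loop:

double X = Math.Cos(lat1rad - lat2rad);
double Y = Math.Cos(lat1rad) * Math.Cos(lat2rad);
double result = Math.Acos(X - Y + Y * Math.Cos(lon2rad - lon1rad));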
Change the way you store long/lat:
struct LongLat
{
    public float lon, lat;   // "long" is a reserved word in C#, hence "lon"
    public float x, y, z;    // equivalent position on the unit sphere
}
When creating a long/lat, also compute the (x, y, z) 3D point that represents the equivalent position on a unit sphere centred at the origin, as in the sketch below.
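A sketch of that conversion (assuming latitude and longitude are already in radians):

static LongLat FromRadians(float lat, float lon)
{
    LongLat p;
    p.lat = lat;
    p.lon = lon;
    p.x = (float)(Math.Cos(lat) * Math.Cos(lon));
    p.y = (float)(Math.Cos(lat) * Math.Sin(lon));
    p.z = (float)Math.Sin(lat);
    return p;
}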
Now, to determine whether point B is nearer to point A than point C is, do the following (a larger dot product means a smaller angle between the unit vectors, and hence a shorter distance):
// is B nearer to A than C is?
bool IsNearer(LongLat A, LongLat B, LongLat C)
{
    return (A.x * B.x + A.y * B.y + A.z * B.z) > (A.x * C.x + A.y * C.y + A.z * C.z);
}
and to get the distance between two points:
float Distance(LongLat A, LongLat B)
{
    // radius is the size of the sphere you're mapping long/lats onto
    return radius * (float)Math.Acos(A.x * B.x + A.y * B.y + A.z * B.z);
}
You could remove the 'radius' term, effectively normalising the distances.
Switching to lookup tables for sin/cos/acos will be faster; there are a lot of C/C++ fixed-point libraries that include those as well.
Here is code from someone else on memoization, which might work if the actual values used are clustered.
Here is an SO question on fixed point.
What is the bottleneck? Is it the sine/cosine function calls or the arccosine call?
If your sine/cosine calls are slow, you could use the following identity to avoid some of them:
1 = sin(x)^2 + cos(x)^2
cos(x) = sqrt(1 - sin(x)^2)
(mind the sign: this form only holds where cos(x) >= 0)
But I like the mapping idea, so that you don't have to recompute values you've already computed. Be careful, though, as the map could get very large very quickly.
How exact do you need the values to be?
If you round your values a bit, then you could store the result of every calculation and check whether it has already been computed before each new calculation.
Well, since lat and long are guaranteed to be within a certain range, you could try using some form of lookup table for your Math.* method calls. Say, a Dictionary<double, double>, as in the sketch below.
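A rough sketch of that idea (a hypothetical helper; the key is rounded so that nearby inputs share a cache entry, trading accuracy for speed):

using System.Collections.Generic;

static readonly Dictionary<double, double> sinCache = new Dictionary<double, double>();

static double CachedSin(double x)
{
    double key = Math.Round(x, 4);   // tune the precision vs. the hit rate
    double value;
    if (!sinCache.TryGetValue(key, out value))
    {
        value = Math.Sin(key);
        sinCache[key] = value;
    }
    return value;
}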
I would argue that you may want to re-examine how you found that function to be the bottleneck (i.e., did you profile the application?).
The equation seems very lightweight to me and shouldn't cause any trouble.
Granted, I don't know your application and you say you do a lot of these calculations.
Nevertheless it is something to consider.
As someone else pointed out, are you sure this is your bottleneck?
I've done some performance testing of a similar application I'm building, where I call a simple method to return a distance between two points using standard trig. 20,000 calls to it shoves it right to the top of the profiling output, yet there's no way I can make it faster... it's just the sheer number of calls.
In this case, I need to reduce the number of calls to it; it's not that the method itself is the bottleneck.
I use a different algorithm for calculating the distance between two lat/long positions; it could be lighter than yours, since it only does one Cos call and one Sqrt call.
public static double GetDistanceBetweenTwoPos(double lat1, double long1, double lat2, double long2)
{
    // flat-earth (equirectangular) approximation, calculation base: miles
    double x = 69.1 * (lat1 - lat2);
    double y = 69.1 * (long1 - long2) * System.Math.Cos(lat2 / 57.3);
    double distance = System.Math.Sqrt(x * x + y * y);
    // distance converted to kilometres
    return distance * 1.609;
}
Someone has already mentioned memoization, and this is a bit similar: if you are comparing the same point to many other points, then it is better to precalculate parts of the equation.
Instead of:
double result = Math.Acos(Math.Sin(lat2rad) * Math.Sin(lat1rad)
+ Math.Cos(lat2rad) * Math.Cos(lat1rad) * Math.Cos(lon2rad - lon1rad));
have (where .sin and .cos are values you precalculated once per coordinate):
double result = Math.Acos(lat2rad.sin * lat1rad.sin
+ lat2rad.cos * lat1rad.cos * (lon2rad.cos * lon1rad.cos + lon1rad.sin * lon2rad.sin));
And I think that's the same formula someone else has posted, because part of the equation will disappear when you expand the brackets. :) A sketch of the precomputation follows.
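For example, a small struct could hold the precalculated values (a hypothetical type; the lowercase member names are chosen to match the expression above):

struct Angle
{
    public double sin, cos;
    public Angle(double radians)
    {
        sin = Math.Sin(radians);
        cos = Math.Cos(radians);
    }
}

// precompute once per coordinate, then reuse across many distance calls:
var lat1rad = new Angle(lat1 * Math.PI / 180.0);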