I want to implement the Hilbert Transform in C#. From this article I saw that the fastest open-source FFT implementation seems to be FFTW, so I downloaded that example and used it to learn how to use the fftw wrapper for C#.
I have a current signal of 200,000 points which I'm using for testing. Getting the Hilbert transform through the FFT is relatively simple (a short sketch of steps 2 and 3 follows the list below):
Compute the fft.
Multiply by 2 all positive frequencies except for the DC and Nyquist components (0 and n/2 + 1, if the sample size is even).
Multiply by 0 all the negative frequencies ([n/2 + 1, n]).
Compute the inverse fft.
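For reference, here is a minimal sketch of steps 2 and 3 in plain C# with System.Numerics.Complex, assuming the full n-point spectrum has already been computed by some FFT; the helper name ApplyHilbertMultipliers and its spectrum argument are illustrative, not part of the fftw wrapper:
using System.Numerics;
// Illustrative helper (not from the fftw wrapper): applies the Hilbert
// multipliers to an already computed full-length spectrum of n bins.
static Complex[] ApplyHilbertMultipliers(Complex[] spectrum)
{
    int n = spectrum.Length;
    var h = new double[n];               // entries default to 0 (negative frequencies)
    h[0] = 1.0;                          // DC component stays untouched
    if (n % 2 == 0)
    {
        h[n / 2] = 1.0;                  // Nyquist component (even n) stays untouched
        for (int i = 1; i < n / 2; i++) h[i] = 2.0;
    }
    else
    {
        for (int i = 1; i <= n / 2; i++) h[i] = 2.0;
    }
    var result = new Complex[n];
    for (int i = 0; i < n; i++) result[i] = spectrum[i] * h[i];
    return result;                       // step 4 (the inverse FFT) happens elsewhere
}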
So far, I've done all of it. The only problem is the inverse FFT: I'm not able to get the same results with FFTW as with the ifft from Matlab.
My code
RealArray _input;
ComplexArray _fft;
void ComputeFFT()
{
_fft = new ComplexArray(_length / 2 + 1);
_input.Set(Data);
_plan = Plan.Create1(_length, _input, _fft, Options.Estimate);
_plan.Execute();
}
So far, I have an FFT with only the positive frequencies, so I don't need to multiply the negative frequencies by zero: they don't even exist. With the following code, I can get my original signal back:
double[] ComputeIFFT(ComplexArray input)
{
double[] temp = new double[_length];
RealArray output = new RealArray(_length);
_plan = Plan.Create1(_length, input, output, Options.Estimate);
_plan.Execute();
temp = output.ToArray();
for (int i = 0; i < _length; ++i)
{
temp[i] /= _length;
}
return temp;
}
The problem comes when I try to get a complex inverse from the signal.
void ComputeHilbert()
{
double[] fft = FFT.ToArray();
double[] h = new double[_length / 2 + 1];
double[] temp = new double[_length * 2];
bool fftLengthIsOdd = (_length & 1) == 1;
h[0] = 1;
for (int i = 1; i < _length / 2; i++) h[i] = 2;
if (!fftLengthIsOdd) h[(_length / 2)] = 1;
for (int i = 0; i <= _length / 2; i++)
{
temp[2 * i] = fft[2*i] * h[i];
temp[2 * i + 1] = fft[2*i + 1] * h[i];
}
ComplexArray _tempHilbert = new ComplexArray(_length);
_tempHilbert.Set(temp);
_hilbert = ComputeIFFT(_tempHilbert);
_hilbertComputed = true;
}
It's important to note that, when I apply the ToArray() method to a ComplexArray object, I get as result a double[] with twice the length of the original array, with the real and imaginary parts stored consecutively. That is, for a ComplexArray object containing "3 + 1i", I would get a double vector of [3, 1].
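As a small illustration of that layout (plain C#, not the wrapper's own code), converting such an interleaved double[] back into System.Numerics.Complex values looks like this:
// Interleaved layout: real and imaginary parts alternate.
double[] interleaved = { 3.0, 1.0, 2.0, -5.0 };   // represents 3 + 1i and 2 - 5i
var values = new System.Numerics.Complex[interleaved.Length / 2];
for (int i = 0; i < values.Length; i++)
    values[i] = new System.Numerics.Complex(interleaved[2 * i], interleaved[2 * i + 1]);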
So, at this moment, what I have is something like:
[DC Frequency, 2*positive frequencies, Nyquist Frequency, zeros]
If I export this data to Matlab and compute the IFFT, I get the same result as its hilbert(signal).
However, if I try to apply the IFFT provided by FFTW, I get weird values from the Nyquist frequency to the end (that is to say, the zeros mess with FFTW).
This is the ifft I'm using to do this:
double[] ComputeIFFT(ComplexArray input)
{
double[] temp;
ComplexArray output = new ComplexArray(_length);
_plan = Plan.Create1(_length, input, output, Direction.Backward, Options.Estimate);
_plan.Execute();
temp = output.ToArray();
for (int i = 0; i < _length; ++i)
{
temp[i] /= _length;
}
return temp;
}
So, just to sum it up, my problem is the way I'm calculating the IFFT. It doesn't seem to work well with zeros. Or maybe Matlab is capable of understanding that it has to apply some different approach and I should do it manually, but I don't know how.
Thank you very much for your help in advance, much appreciated!
So the problem was the ComputeIFFT function. In the for loop I was doing i < _length, but the length of the temp array is 2 * _length, because it holds both the real and the imaginary values.
That's why I only got half of the values right.
The correct code for it is:
double[] ComputeIFFT(ComplexArray input)
{
double[] temp;
ComplexArray output = new ComplexArray(_length);
_plan = Plan.Create1(_length, input, output, Direction.Backward, Options.Estimate);
_plan.Execute();
temp = output.ToArray();
for (int i = 0; i < temp.Length; ++i)
{
temp[i] /= _length;
}
return temp;
}
I hope this will be useful for anyone trying to implement the Hilbert Transform through FFTW in C#.
Related
I'm taking the Coursera machine learning course right now and I can't get my gradient descent linear regression function to minimize. I use one dependent variable, an intercept, and four values of x and y, so the equations are fairly simple. The final value of the gradient descent equation varies wildly depending on the initial values of alpha and beta, and I can't figure out why.
I've only been coding for about two weeks, so my knowledge is limited to say the least; please keep this in mind if you take the time to help.
using System;
namespace LinearRegression
{
class Program
{
static void Main(string[] args)
{
Random rnd = new Random();
const int N = 4;
//We randomize the inital values of alpha and beta
double theta1 = rnd.Next(0, 100);
double theta2 = rnd.Next(0, 100);
//Values of x, i.e. the independent variable
double[] x = new double[N] { 1, 2, 3, 4 };
//Values of y, i.e. the dependent variable
double[] y = new double[N] { 5, 7, 9, 12 };
double sumOfSquares1;
double sumOfSquares2;
double temp1;
double temp2;
double sum;
double learningRate = 0.001;
int count = 0;
do
{
//We reset the Generalized cost function, called sum of squares
//since I originally used SS to
//determine if the function was minimized
sumOfSquares1 = 0;
sumOfSquares2 = 0;
//Adding 1 to counter for each iteration to keep track of how
//many iterations are completed thus far
count += 1;
//First we calculate the Generalized cost function, which is
//to be minimized
sum = 0;
for (int i = 0; i < (N - 1); i++)
{
sum += Math.Pow((theta1 + theta2 * x[i] - y[i]), 2);
}
//Since we have 4 values of x and y we have 1/(2*N) = 1 /8 = 0.125
sumOfSquares1 = 0.125 * sum;
//Then we calculate the new alpha value, using the derivative of
//the cost function.
sum = 0;
for (int i = 0; i < (N - 1); i++)
{
sum += theta1 + theta2 * x[i] - y[i];
}
//Since we have 4 values of x and y we have 1/(N) = 1 /4 = 0.25
temp1 = theta1 - learningRate * 0.25 * sum;
//Same for the beta value, it has a different derivative
sum = 0;
for (int i = 0; i < (N - 1); i++)
{
sum += (theta1 + theta2 * x[i]) * x[i] - y[i];
}
temp2 = theta2 - learningRate * 0.25 * sum;
//We change the values of alpha and beta at the same time, otherwise the
//function won't work
theta1 = temp1;
theta2 = temp2;
//We then calculate the cost function again, with new alpha and beta values
sum = 0;
for (int i = 0; i < (N - 1); i++)
{
sum += Math.Pow((theta1 + theta2 * x[i] - y[i]), 2);
}
sumOfSquares2 = 0.125 * sum;
Console.WriteLine("Alpha: {0:N}", theta1);
Console.WriteLine("Beta: {0:N}", theta2);
Console.WriteLine("GCF Before: {0:N}", sumOfSquares1);
Console.WriteLine("GCF After: {0:N}", sumOfSquares2);
Console.WriteLine("Iterations: {0}", count);
Console.WriteLine(" ");
} while (sumOfSquares2 <= sumOfSquares1 && count < 5000);
//we end the iteration cycle once the generalized cost function
//cannot be reduced any further or after 5000 iterations
Console.ReadLine();
}
}
}
There are two bugs in the code.
First, I assume that you would like to iterate through all the elements in the array, so rework the for loop like this: for (int i = 0; i < N; i++)
Second, when updating the theta2 value the summation is not calculated correctly. According to the update function it should look like this: sum += (theta1 + theta2 * x[i] - y[i]) * x[i];
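Put together, a minimal sketch of the two corrected update loops, reusing the variable names from the question, would look roughly like this:
// Gradient for theta1 (intercept): sum of residuals over all N samples
sum = 0;
for (int i = 0; i < N; i++)
{
    sum += theta1 + theta2 * x[i] - y[i];
}
temp1 = theta1 - learningRate * (1.0 / N) * sum;
// Gradient for theta2 (slope): each residual is multiplied by x[i]
sum = 0;
for (int i = 0; i < N; i++)
{
    sum += (theta1 + theta2 * x[i] - y[i]) * x[i];
}
temp2 = theta2 - learningRate * (1.0 / N) * sum;
// Update both parameters simultaneously
theta1 = temp1;
theta2 = temp2;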
Why do the final values depend on the initial values?
Because the gradient descent update step is calculated from these values. If the initial values (Starting Point) are too big or too small, then they will be too far away from the final values (Final Value). You could solve this problem by:
Increasing the iteration steps (e.g. 5000 to 50000): the gradient descent algorithm has more time to converge.
Increasing the learning rate (e.g. 0.001 to 0.01): the gradient descent update steps are bigger, therefore it converges faster. Note: if the learning rate is too large, then it is possible to step over the global minimum.
The slope (theta2) is around 2.5 and the intercept (theta1) is around 2.3 for the given data. I have created a GitHub project to fix your code and I have also added a shorter solution using LINQ; it is 5 lines of code. If you are curious, check it out here.
I want to extract the high-amplitude portion from the audio and the related time. Please help me with how I can return the x and y axis values and then sort them to get the highest point.
NAudio.Wave.WaveChannel32 wave = new NAudio.Wave.WaveChannel32(new NAudio.Wave.WaveFileReader(open.FileName));
byte[] Buffer = new byte[163840];
int read = 0;
while (wave.Position < wave.Length)
{
read = wave.Read(Buffer, 0, 16384);
}
for (int i = 0; i < read / 4; i++)
{
chart1.Series["wave"].Points.Add(BitConverter.ToSingle(Buffer, i * 4));
}
Use the following expression to return the maximum Y value in your points list:
chart1.Series["wave"].Points.Max(p => p.YValues[0])
I'm trying to change my neural net from using sigmoid activation for the hidden and output layers to the tanh function.
I'm confused about what I should change: just the output calculation for the neurons, or also the error calculation for backpropagation?
This is the output calculation:
public void calcOutput()
{
if (!isBias)
{
float sum = 0;
float bias = 0;
//System.out.println("Looking through " + connections.size() + " connections");
for (int i = 0; i < connections.Count; i++)
{
Connection c = (Connection) connections[i];
Node from = c.getFrom();
Node to = c.getTo();
// Is this connection moving forward to us
// Ignore connections that we send our output to
if (to == this)
{
// This isn't really necessary
// But I am treating the bias individually in case I need to at some point
if (from.isBias) bias = from.getOutput()*c.getWeight();
else sum += from.getOutput()*c.getWeight();
}
}
// Output is the result of the activation function (tanh here)
output = Tanh(bias+sum);
}
}
It works great for how I trained it before, but now I want to train it to give 1 or -1 as output.
When I change
output = Sigmoid(bias+sum);
to
output = Tanh(bias+sum);
the results are all messed up...
Sigmoid:
public static float Sigmoid(float x)
{
return 1.0f / (1.0f + (float) Mathf.Exp(-x));
}
Tanh:
public float Tanh(float x)
{
//return (float)(Mathf.Exp(x) - Mathf.Exp(-x)) / (Mathf.Exp(x) + Mathf.Exp(-x));
//return (float)(1.7159f * System.Math.Tanh(2/3 * x));
return (float)System.Math.Tanh(x);
}
As you can see, I tried the different formulas I found for tanh, but none of the outputs make sense: I get -1 where I expect 0, or 0.76159 where I expect 1, or it keeps flipping between a positive and a negative number when expecting -1, and other mismatches...
-EDIT- Updated, currently working code (I changed the above calcOutput to what I use now):
public float[] train(float[] inputs, float[] answer)
{
float[] result = feedForward(inputs);
deltaOutput = new float[result.Length];
for(int ii=0; ii<result.Length; ii++)
{
deltaOutput[ii] = 0.66666667f * (1.7159f - (result[ii]*result[ii])) * (answer[ii]-result[ii]);
}
// BACKPROPAGATION
for(int ii=0; ii<output.Length; ii++)
{
ArrayList connections = output[ii].getConnections();
for (int i = 0; i < connections.Count; i++)
{
Connection c = (Connection) connections[i];
Node node = c.getFrom();
float o = node.getOutput();
float deltaWeight = o*deltaOutput[ii];
c.adjustWeight(LEARNING_CONSTANT*deltaWeight);
}
}
// ADJUST HIDDEN WEIGHTS
for (int i = 0; i < hidden.Length; i++)
{
ArrayList connections = hidden[i].getConnections();
//Debug.Log(connections.Count);
float sum = 0;
// Sum output delta * hidden layer connections (just one output)
for (int j = 0; j < connections.Count; j++)
{
Connection c = (Connection) connections[j];
// Is this a connection from hidden layer to next layer (output)?
if (c.getFrom() == hidden[i])
{
for(int k=0; k<deltaOutput.Length; k++)
sum += c.getWeight()*deltaOutput[k];
}
}
// Then adjust the weights coming in based:
// Above sum * derivative of sigmoid output function for hidden neurons
for (int j = 0; j < connections.Count; j++)
{
Connection c = (Connection) connections[j];
// Is this a connection from previous layer (input) to hidden layer?
if (c.getTo() == hidden[i])
{
float o = hidden[i].getOutput();
float deltaHidden = o * (1 - o); // Derivative of sigmoid(x)
deltaHidden *= sum;
Node node = c.getFrom();
float deltaWeight = node.getOutput()*deltaHidden;
c.adjustWeight(LEARNING_CONSTANT*deltaWeight);
}
}
}
return result;
}
I'm confused about what I should change: just the output calculation for the neurons, or also the error calculation for backpropagation? This is the output calculation:
You should be using the derivative of the sigmoid function somewhere in your backpropagation code. You will also need to replace that with the derivative of the tanh function, which is 1 - (tanh(x))^2.
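For instance, a minimal sketch of the output-layer delta with the tanh derivative, reusing the names from the train method above (result[ii] is already tanh of the neuron's net input, so the derivative is simply 1 - result[ii]^2):
for (int ii = 0; ii < result.Length; ii++)
{
    // tanh'(x) = 1 - tanh(x)^2, and result[ii] == tanh(x)
    float derivative = 1.0f - result[ii] * result[ii];
    deltaOutput[ii] = derivative * (answer[ii] - result[ii]);
}
// The hidden-layer delta follows the same idea: replace o * (1 - o)
// (the sigmoid derivative) with 1 - o * o if those neurons also use tanh.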
Your code looks like C#. I get this:
Console.WriteLine(Math.Tanh(0)); // prints 0
Console.WriteLine(Math.Tanh(-1)); // prints -0.761594155955765
Console.WriteLine(Math.Tanh(1)); // prints 0.761594155955765
Console.WriteLine(Math.Tanh(0.234)); // prints 0.229820548214317
Console.WriteLine(Math.Tanh(-4)); // prints -0.999329299739067
Which is in line with the tanh plot:
I think you're reading the results wrong: you get the correct answer for 1. Are you sure you get -1 for tanh(0)?
If you're sure there's a problem, please post more code.
I am a newcomer to sound programming. I have a real-time sound visualizer (http://www.codeproject.com/Articles/20025/Sound-visualizer-in-C), which I downloaded from codeproject.com.
In the AudioFrame.cs class there is an array as below:
_fftLeft = FourierTransform.FFTDb(ref _waveLeft);
_fftLeft is a double array. _waveLeft is also a double array. As shown above, they applied the
FourierTransform.cs class's FFTDb function to the _waveLeft array.
Here is FFTDb function:
static public double[] FFTDb(ref double[] x)
{
n = x.Length;
nu = (int)(Math.Log(n) / Math.Log(2));
int n2 = n / 2;
int nu1 = nu - 1;
double[] xre = new double[n];
double[] xim = new double[n];
double[] decibel = new double[n2];
double tr, ti, p, arg, c, s;
for (int i = 0; i < n; i++)
{
xre[i] = x[i];
xim[i] = 0.0f;
}
int k = 0;
for (int l = 1; l <= nu; l++)
{
while (k < n)
{
for (int i = 1; i <= n2; i++)
{
p = BitReverse(k >> nu1);
arg = 2 * (double)Math.PI * p / n;
c = (double)Math.Cos(arg);
s = (double)Math.Sin(arg);
tr = xre[k + n2] * c + xim[k + n2] * s;
ti = xim[k + n2] * c - xre[k + n2] * s;
xre[k + n2] = xre[k] - tr;
xim[k + n2] = xim[k] - ti;
xre[k] += tr;
xim[k] += ti;
k++;
}
k += n2;
}
k = 0;
nu1--;
n2 = n2 / 2;
}
k = 0;
int r;
while (k < n)
{
r = BitReverse(k);
if (r > k)
{
tr = xre[k];
ti = xim[k];
xre[k] = xre[r];
xim[k] = xim[r];
xre[r] = tr;
xim[r] = ti;
}
k++;
}
for (int i = 0; i < n / 2; i++)
decibel[i] = 10.0 * Math.Log10((float)(Math.Sqrt((xre[i] * xre[i]) + (xim[i] * xim[i]))));
return decibel;
}
When I play a music note on a guitar I want to know its frequency in numerical form. I wrote a foreach loop to see what the output of the _fftLeft array is, as below:
foreach (double myarray in _fftLeft)
{
Console.WriteLine(myarray );
}
This output contains lots of real-time values, as below:
41.3672743963389,
43.0176034462662,
35.3677383746087,
42.5968946936404,
42.0600935794783,
36.7521669642071,
41.6356709559342,
41.7189032845742,
41.1002451261724,
40.8035583510188,
45.604366914128,
39.645552593115
I want to know what those values are (frequencies or not). If they are frequencies, why does the output contain low-frequency values? And when I play a guitar note I want to detect the frequency of that particular note.
Based on the posted code, FFTDb first computes the FFT, then computes and returns the magnitudes of the frequency spectrum on the logarithmic decibel scale. In other words, _fftLeft then contains magnitudes for a discrete set of frequencies. The actual values of those frequencies can be computed from the array index and the sampling frequency according to this answer.
As an example, if you were plotting the _fftLeft output for a pure sinusoidal tone input you should be able to see a clear spike in the index corresponding to the sinusoidal frequency. For a guitar note however you are likely going to see multiple spikes in magnitude corresponding to the harmonics. To detect the note's frequency aka pitch is a more complicated topic and typically requires the use of one of several pitch detection algorithms.
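As a rough illustration (not the visualizer's own code), finding the strongest bin and converting its index to a frequency could look like the sketch below, where decibels is the array returned by FFTDb, fftSize is the length of _waveLeft, and sampleRate is the sampling frequency of the capture; all three names are assumptions:
int peakIndex = 0;
for (int i = 1; i < decibels.Length; i++)
{
    if (decibels[i] > decibels[peakIndex]) peakIndex = i;   // strongest magnitude
}
double binWidth = (double)sampleRate / fftSize;             // Hz covered by each bin
double peakFrequency = peakIndex * binWidth;
// For a guitar note the strongest bin is often a harmonic rather than the
// fundamental, which is why a dedicated pitch detection algorithm is more robust.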
I have a data structure with a field of type float. A collection of these structures needs to be sorted by the value of the float. Is there a radix sort implementation for this?
If there isn't, is there a fast way to access the exponent, the sign and the mantissa?
Because if you sort the floats first on mantissa and then on exponent, with the exponent sorted last, you can sort floats in O(n).
Update:
I was quite interested in this topic, so I sat down and implemented it (using this very fast and memory-conservative implementation). I also read this one (thanks celion) and found out that you don't even have to split the floats into mantissa and exponent to sort them. You just have to take the bits one-to-one and perform an int sort. You just have to take care of the negative values, which have to be put in reverse order in front of the positive ones at the end of the algorithm (I did that in one step with the last iteration of the algorithm to save some CPU time).
So here's my float radix sort:
public static float[] RadixSort(this float[] array)
{
// temporary array and the array of converted floats to ints
int[] t = new int[array.Length];
int[] a = new int[array.Length];
for (int i = 0; i < array.Length; i++)
a[i] = BitConverter.ToInt32(BitConverter.GetBytes(array[i]), 0);
// set the group length to 1, 2, 4, 8 or 16
// and see which one is quicker
int groupLength = 4;
int bitLength = 32;
// counting and prefix arrays
// (dimension is 2^r, the number of possible values of a r-bit number)
int[] count = new int[1 << groupLength];
int[] pref = new int[1 << groupLength];
int groups = bitLength / groupLength;
int mask = (1 << groupLength) - 1;
int negatives = 0, positives = 0;
for (int c = 0, shift = 0; c < groups; c++, shift += groupLength)
{
// reset count array
for (int j = 0; j < count.Length; j++)
count[j] = 0;
// counting elements of the c-th group
for (int i = 0; i < a.Length; i++)
{
count[(a[i] >> shift) & mask]++;
// additionally count all negative
// values in first round
if (c == 0 && a[i] < 0)
negatives++;
}
if (c == 0) positives = a.Length - negatives;
// calculating prefixes
pref[0] = 0;
for (int i = 1; i < count.Length; i++)
pref[i] = pref[i - 1] + count[i - 1];
// from a[] to t[] elements ordered by c-th group
for (int i = 0; i < a.Length; i++){
// Get the right index to sort the number in
int index = pref[(a[i] >> shift) & mask]++;
if (c == groups - 1)
{
// We're in the last (most significant) group, if the
// number is negative, order them inversely in front
// of the array, pushing positive ones back.
if (a[i] < 0)
index = positives - (index - negatives) - 1;
else
index += negatives;
}
t[index] = a[i];
}
// a[]=t[] and start again until the last group
t.CopyTo(a, 0);
}
// Convert back the ints to the float array
float[] ret = new float[a.Length];
for (int i = 0; i < a.Length; i++)
ret[i] = BitConverter.ToSingle(BitConverter.GetBytes(a[i]), 0);
return ret;
}
It is slightly slower than an int radix sort because of the array copying at the beginning and end of the function, where the floats are bitwise copied to ints and back. The whole function is nevertheless still O(n). In any case it is much faster than sorting 3 times in a row like you proposed. I don't see much room for optimizations anymore, but if anyone does: feel free to tell me.
To sort descending change this line at the very end:
ret[i] = BitConverter.ToSingle(BitConverter.GetBytes(a[i]), 0);
to this:
ret[a.Length - i - 1] = BitConverter.ToSingle(BitConverter.GetBytes(a[i]), 0);
Measuring:
I set up a short test containing all the special cases of floats (NaN, +/-Inf, Min/Max value, 0) and random numbers. It sorts in exactly the same order as Linq or Array.Sort sorts floats:
NaN -> -Inf -> Min -> Negative Nums -> 0 -> Positive Nums -> Max -> +Inf
So I ran a test with a huge array of 10M numbers:
float[] test = new float[10000000];
Random rnd = new Random();
for (int i = 0; i < test.Length; i++)
{
byte[] buffer = new byte[4];
rnd.NextBytes(buffer);
float rndfloat = BitConverter.ToSingle(buffer, 0);
switch(i){
case 0: { test[i] = float.MaxValue; break; }
case 1: { test[i] = float.MinValue; break; }
case 2: { test[i] = float.NaN; break; }
case 3: { test[i] = float.NegativeInfinity; break; }
case 4: { test[i] = float.PositiveInfinity; break; }
case 5: { test[i] = 0f; break; }
default: { test[i] = rndfloat; break; }
}
}
And measured the time of the different sorting algorithms:
Stopwatch sw = new Stopwatch();
sw.Start();
float[] sorted1 = test.RadixSort();
sw.Stop();
Console.WriteLine(string.Format("RadixSort: {0}", sw.Elapsed));
sw.Reset();
sw.Start();
float[] sorted2 = test.OrderBy(x => x).ToArray();
sw.Stop();
Console.WriteLine(string.Format("Linq OrderBy: {0}", sw.Elapsed));
sw.Reset();
sw.Start();
Array.Sort(test);
float[] sorted3 = test;
sw.Stop();
Console.WriteLine(string.Format("Array.Sort: {0}", sw.Elapsed));
And the output was (update: now ran with release build, not debug):
RadixSort: 00:00:03.9902332
Linq OrderBy: 00:00:17.4983272
Array.Sort: 00:00:03.1536785
Roughly more than four times as fast as Linq. That is not bad, but still not as fast as Array.Sort, though not much worse either. But I was really surprised by this one: I expected it to be slightly slower than Linq on very small arrays, so I ran a test with just 20 elements:
RadixSort: 00:00:00.0012944
Linq OrderBy: 00:00:00.0072271
Array.Sort: 00:00:00.0002979
And even this time my radix sort is quicker than Linq, but way slower than Array.Sort. :)
Update 2:
I made some more measurements and found out some interesting things: longer group length constants mean fewer iterations but more memory usage. If you use a group length of 16 bits (only 2 iterations), you have a huge memory overhead when sorting small arrays, but you can beat Array.Sort for arrays larger than about 100k elements, even if not by very much. The chart's axes are both logarithmic:
(source: daubmeier.de)
There's a nice explanation of how to perform radix sort on floats here:
http://www.codercorner.com/RadixSortRevisited.htm
If all your values are positive, you can get away with using the binary representation; the link explains how to handle negative values.
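For reference, the usual key transform from that kind of write-up looks roughly like this (a sketch only, not a complete sort): map the float's bit pattern to an unsigned key whose plain integer ordering matches the float ordering, by flipping all bits of negative values and only the sign bit of non-negative ones:
static uint SortableKey(float value)
{
    uint bits = (uint)BitConverter.ToInt32(BitConverter.GetBytes(value), 0);
    // Negative floats: flip every bit so "more negative" becomes "smaller".
    // Non-negative floats: set the sign bit so they sort after all negatives.
    return (bits & 0x80000000u) != 0 ? ~bits : bits | 0x80000000u;
}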
By doing some fancy casting and swapping arrays instead of copying, this version is 2x faster for 10M numbers than Philip Daubmeier's original with the group length set to 8. It is 3x faster than Array.Sort for that array size.
static public void RadixSortFloat(this float[] array, int arrayLen = -1)
{
// Some use cases have an array that is longer as the filled part which we want to sort
if (arrayLen < 0) arrayLen = array.Length;
// Reinterpret the original float array as an int span
Span<float> asFloat = array;
Span<int> a = MemoryMarshal.Cast<float, int>(asFloat);
// Create a temp array
Span<int> t = new Span<int>(new int[arrayLen]);
// set the group length to 1, 2, 4, 8 or 16 and see which one is quicker
int groupLength = 8;
int bitLength = 32;
// counting and prefix arrays
// (dimension is 2^r, the number of possible values of a r-bit number)
var dim = 1 << groupLength;
int groups = bitLength / groupLength;
if (groups % 2 != 0) throw new Exception("groups must be even so data is in original array at end");
var count = new int[dim];
var pref = new int[dim];
int mask = (dim) - 1;
int negatives = 0, positives = 0;
// counting elements of the 1st group, including negative/positive
for (int i = 0; i < arrayLen; i++)
{
if (a[i] < 0) negatives++;
count[(a[i] >> 0) & mask]++;
}
positives = arrayLen - negatives;
int c;
int shift;
for (c = 0, shift = 0; c < groups - 1; c++, shift += groupLength)
{
CalcPrefixes();
var nextShift = shift + groupLength;
//
for (var i = 0; i < arrayLen; i++)
{
var ai = a[i];
// Get the right index to sort the number in
int index = pref[( ai >> shift) & mask]++;
count[( ai>> nextShift) & mask]++;
t[index] = ai;
}
// swap the arrays and start again until the last group
var temp = a;
a = t;
t = temp;
}
// Last round
CalcPrefixes();
for (var i = 0; i < arrayLen; i++)
{
var ai = a[i];
// Get the right index to sort the number in
int index = pref[( ai >> shift) & mask]++;
// We're in the last (most significant) group, if the
// number is negative, order them inversely in front
// of the array, pushing positive ones back.
if ( ai < 0) index = positives - (index - negatives) - 1; else index += negatives;
//
t[index] = ai;
}
void CalcPrefixes()
{
pref[0] = 0;
for (int i = 1; i < dim; i++)
{
pref[i] = pref[i - 1] + count[i - 1];
count[i - 1] = 0;
}
}
}
You can use an unsafe block to memcpy or alias a float * to a uint * to extract the bits.
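A minimal sketch of that idea; it needs to be compiled with /unsafe, and on newer frameworks BitConverter.SingleToInt32Bits gives you the same bits without an unsafe block:
static uint FloatBits(float value)
{
    unsafe
    {
        return *(uint*)&value;            // alias the float's storage as a uint
    }
}
// Splitting the IEEE 754 single into its fields (inside some method):
uint bits = FloatBits(-1.5f);
uint sign = bits >> 31;                   // 1 bit
uint exponent = (bits >> 23) & 0xFF;      // 8 bits
uint mantissa = bits & 0x7FFFFF;          // 23 bits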
I think your best bet, if the values aren't too close together and there's a reasonable precision requirement, is to just use the actual float digits before and after the decimal point to do the sorting.
For example, you can just use the first 4 decimals (be they 0 or not) to do the sorting.
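A tiny sketch of that idea, assuming the values are small enough that scaling by 10^4 still fits into an int:
// Keep 4 decimal places by scaling to an integer key, then radix sort the keys.
static int FixedPointKey(float value) => (int)Math.Round(value * 10000f);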