Synthesizer Slide from One Frequency to Another - C#

I'm writing a synthesizer in C# using NAudio. I'm trying to make it slide smoothly between frequencies. But I have a feeling I'm not understanding something about the math involved. It slides wildly at a high pitch before switching to the correct next pitch.
What's the mathematically correct way to slide from one pitch to another?
Here's the code:
public override int Read(float[] buffer, int offset, int sampleCount)
{
    int sampleRate = WaveFormat.SampleRate;
    for (int n = 0; n < sampleCount; n++)
    {
        if (nextFrequencyQueue.Count > 0)
        {
            nextFrequency = nextFrequencyQueue.Dequeue();
        }
        if (nextFrequency > 0 && Frequency != nextFrequency)
        {
            if (Frequency == 0) //special case for first note
            {
                Frequency = nextFrequency;
            }
            else //slide up or down to next frequency
            {
                if (Frequency < nextFrequency)
                {
                    Frequency = Clamp(Frequency + frequencyStep, nextFrequency, Frequency);
                }
                if (Frequency > nextFrequency)
                {
                    Frequency = Clamp(Frequency - frequencyStep, Frequency, nextFrequency);
                }
            }
        }
        buffer[n + offset] = (float)(Amplitude * Math.Sin(2 * Math.PI * time * Frequency));
        try
        {
            time += (double)1 / (double)sampleRate;
        }
        catch
        {
            time = 0;
        }
    }
    return sampleCount;
}

You are using absolute time to determine the wave function, so when you change the frequency very slightly, the next sample is what it would have been had you started the run at that new frequency.
I don't know the established best approach, but a simple approach that's probably good enough is to compute the phase as a fraction of a cycle (φ = (t · f_old) mod 1) and adjust t so that phase is preserved under the new frequency (t = φ / f_new).
A smoother approach would be to preserve the first derivative. This is more difficult because, unlike for the wave itself, the amplitude of the first derivative varies with frequency, which means that preserving the phase isn't sufficient. In any event, this added complexity is almost certainly overkill, given that you are varying the frequency smoothly.
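A minimal sketch of the phase-preserving adjustment against the question's code (the helper name is invented; time, Frequency and nextFrequency are the question's own fields):
// Sketch only: adjust 'time' so that switching from oldFreq to newFreq
// keeps the sine phase continuous at the changeover sample.
static double PreservePhase(double time, double oldFreq, double newFreq)
{
    // Phase as a fraction of one cycle, in [0, 1).
    double phaseFraction = (time * oldFreq) % 1.0;

    // Pick the time that reproduces the same phase fraction at the new frequency.
    return phaseFraction / newFreq;
}

// Usage inside Read(), just before changing the frequency:
// time = PreservePhase(time, Frequency, nextFrequency);
// Frequency = nextFrequency;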

One approach is to use wavetables. You construct a full cycle of a sine wave in an array, then in your Read function you can simply lookup into it. Each sample you read, you advance by an amount calculated from the desired output frequency. Then when you want to glide to a new frequency, you calculate the new delta for lookups into the table, and then instead of going straight there you adjust the delta incrementally to move to the new value over a set period of time (the 'glide' or portamento time).
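A rough sketch of that idea (the class and all names are invented for illustration, not part of NAudio):
// Sketch of a wavetable oscillator with a simple linear glide (portamento).
public class WavetableOscillator
{
    const int TableSize = 2048;
    readonly float[] table = new float[TableSize];
    double phase;          // current read position in the table, in table samples
    double delta;          // table samples to advance per output sample
    double targetDelta;    // delta we are gliding towards
    double deltaStep;      // per-sample change of delta during the glide

    public WavetableOscillator()
    {
        // One full cycle of a sine wave.
        for (int i = 0; i < TableSize; i++)
            table[i] = (float)Math.Sin(2 * Math.PI * i / TableSize);
    }

    // glideSeconds = 0 jumps (almost) immediately to the new frequency.
    public void SetFrequency(double frequency, int sampleRate, double glideSeconds)
    {
        targetDelta = frequency * TableSize / sampleRate;
        int glideSamples = Math.Max(1, (int)(glideSeconds * sampleRate));
        deltaStep = (targetDelta - delta) / glideSamples;
    }

    public float NextSample()
    {
        // Move delta a little closer to the target each sample.
        if (Math.Abs(targetDelta - delta) > Math.Abs(deltaStep))
            delta += deltaStep;
        else
            delta = targetDelta;

        int index = (int)phase;                  // nearest-sample lookup; linear
        float sample = table[index % TableSize]; // interpolation would be smoother

        phase = (phase + delta) % TableSize;
        return sample;
    }
}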

Frequency = Clamp(Frequency + frequencyStep, nextFrequency, Frequency);
The human ear doesn't work like that, it is highly non-linear. Nature is logarithmic. The frequency of middle C is 261.626 Hz. The next note, C#, is related to the previous one by a factor of Math.Pow(2, 1/12.0) or about 1.0594631. So C# is 277.183 Hz, an increment of 15.557 Hz.
The next C up the scale has double the frequency, 523.252 Hz. And C# after that is 554.366 Hz, an increment of 31.084 Hz. Note how the increment doubled. So the frequencyStep in your code snippet should not be an addition, it should be a multiplication.
buffer[n + offset] = (float)(Amplitude * Math.Sin(2 * Math.PI * time * Frequency));
That's a problem as well. Your calculated samples do not smoothly transition from one frequency to the next: there's a step when "Frequency" changes. You have to apply an offset to "time" so that, at the sample where the frequency changes, the new frequency produces the exact same sample value you previously calculated with the old value of Frequency. Such steps produce high-frequency artifacts with many harmonics that are gratingly obvious to the human ear.
Background info is available in the Wikipedia article on the subject. It also helps to visualize the waveform you generate; you would have easily diagnosed the step problem that way.
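A sketch that combines both points follows. Instead of offsetting "time" it keeps a running phase accumulator, a common equivalent that also avoids the step, and it glides by a constant ratio per sample rather than a constant number of Hz. All names and the glide rate are invented for illustration:
double phase;      // accumulated phase in radians; replaces the absolute 'time'
double frequency;  // current frequency in Hz; must be seeded with the first note,
                   // since a multiplicative glide starting from 0 would stay at 0

float NextSample(double targetFrequency, double sampleRate, double amplitude)
{
    // Glide by a constant ratio per sample: here roughly one semitone per 100 ms.
    double ratioPerSample = Math.Pow(2.0, 1.0 / 12.0 / (0.1 * sampleRate));

    if (frequency < targetFrequency)
        frequency = Math.Min(frequency * ratioPerSample, targetFrequency);
    else if (frequency > targetFrequency)
        frequency = Math.Max(frequency / ratioPerSample, targetFrequency);

    // Advance the phase by this sample's angular increment; because only the
    // increment depends on frequency, there is no step when the frequency changes.
    phase += 2 * Math.PI * frequency / sampleRate;
    if (phase > 2 * Math.PI) phase -= 2 * Math.PI;

    return (float)(amplitude * Math.Sin(phase));
}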

Related

How to get the fundamental frequency using Harmonic Product Spectrum?

I'm trying to get the pitch from the microphone input. First I decomposed the signal from the time domain to the frequency domain through an FFT, applying a Hamming window to the signal before performing the FFT. Then I passed the complex FFT results to the Harmonic Product Spectrum, where the results get downsampled and the downsampled peaks are multiplied, giving a value as a complex number. Then what should I do to get the fundamental frequency?
public float[] HarmonicProductSpectrum(Complex[] data)
{
    Complex[] hps2 = Downsample(data, 2);
    Complex[] hps3 = Downsample(data, 3);
    Complex[] hps4 = Downsample(data, 4);
    Complex[] hps5 = Downsample(data, 5);
    float[] array = new float[hps5.Length];
    for (int i = 0; i < array.Length; i++)
    {
        checked
        {
            array[i] = data[i].X * hps2[i].X * hps3[i].X * hps4[i].X * hps5[i].X;
        }
    }
    return array;
}

public Complex[] Downsample(Complex[] data, int n)
{
    Complex[] array = new Complex[Convert.ToInt32(Math.Ceiling(data.Length * 1.0 / n))];
    for (int i = 0; i < array.Length; i++)
    {
        array[i].X = data[i * n].X;
    }
    return array;
}
I have tried to get the magnitude using,
magnitude[i] = (float)Math.Sqrt(array[i] * array[i] + (data[i].Y * data[i].Y));
inside the for loop in HarmonicProductSpectrum method. Then tried to get the maximum bin using,
float max_mag = float.MinValue;
float max_index = -1;
for (int i = 0; i < array.Length / 2; i++)
    if (magnitude[i] > max_mag)
    {
        max_mag = magnitude[i];
        max_index = i;
    }
and then I tried to get the frequency using,
var frequency = max_index * 44100 / 1024;
But I was getting garbage values like 1248.926, 1205.859, 2454.785 for the A4 note (440 Hz), and those values don't look like harmonics of A4.
Any help would be greatly appreciated.
I implemented harmonic product spectrum in Python to make sure your data and algorithm were working nicely.
Here’s what I see when applying harmonic product spectrum to the full dataset, Hamming-windowed, with 5 downsample–multiply stages:
This is just the bottom kilohertz, but the spectrum is pretty much dead above 1 KHz.
If I chunk up the long audio clip into 8192-sample chunks (with 4096-sample, 50% overlap), Hamming-window each chunk, and run HPS on it, I get a matrix of HPS results, kind of a movie of the HPS spectrum over the entire dataset. The fundamental frequency seems to be quite stable.
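In C#, that chunking loop could look roughly like this (a sketch assuming the question's Complex struct with an X field; Fft() is a placeholder for whatever FFT you are already using):
// Sketch: 8192-sample chunks with 4096-sample (50%) overlap, Hamming-windowed,
// each passed through an FFT and then HPS.
const int ChunkSize = 8192;
const int HopSize = ChunkSize / 2;

void AnalyzeChunks(float[] samples)
{
    float[] window = new float[ChunkSize];
    for (int i = 0; i < ChunkSize; i++)
        window[i] = (float)(0.54 - 0.46 * Math.Cos(2 * Math.PI * i / (ChunkSize - 1)));

    for (int start = 0; start + ChunkSize <= samples.Length; start += HopSize)
    {
        Complex[] chunk = new Complex[ChunkSize];
        for (int i = 0; i < ChunkSize; i++)
            chunk[i].X = samples[start + i] * window[i];    // imaginary part stays 0

        Complex[] spectrum = Fft(chunk);                     // your FFT, keep only positive frequencies
        float[] hps = HarmonicProductSpectrum(spectrum);     // the method from the question
        // ...find and record the peak bin of hps for this chunk...
    }
}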
The full source code is in the gist linked at the end of this answer. There's a lot of code that helps chunk the data and visualize the output of HPS running on the chunks, but the core HPS function, starting at def hps(…, is short. It does have a couple of tricks in it, though.
Given the strange frequencies that you’re finding the peak at, it could be that you’re operating on the full spectrum, from 0 to 44.1 KHz? You want to only keep the “positive” frequencies, i.e., from 0 to 22.05 KHz, and apply the HPS algorithm (downsample–multiply) on that.
But assuming you start out with a positive-frequency-only spectrum and take its magnitude properly, it looks like you should get reasonable results. Try to save out the output of your HarmonicProductSpectrum to see if it's anything like the above.
Again, the full source code is at https://gist.github.com/fasiha/957035272009eb1c9eb370936a6af2eb. (There I try out another couple of spectral estimators, Welch's method from SciPy and my port of the Blackman-Tukey spectral estimator. I'm not sure if you are set on implementing HPS or if you would consider other pitch estimators, so I'm leaving the Welch/Blackman-Tukey results there.)
Original: I wrote this as a comment but had to keep revising it because it was confusing, so here it is as a mini-answer.
Based on my brief reading of this intro to HPS, I don’t think you’re taking the magnitudes correctly after you find the four decimated responses.
You want:
array[i] = sqrt(data[i] * Complex.conjugate(data[i]) *
hps2[i] * Complex.conjugate(hps2[i]) *
hps3[i] * Complex.conjugate(hps3[i]) *
hps4[i] * Complex.conjugate(hps4[i]) *
hps5[i] * Complex.conjugate(hps5[i])).X;
This uses the sqrt(x * Complex.conjugate(x)) trick to find x’s magnitude, and then multiplies all 5 magnitudes.
(Actually, it moves the sqrt outside the product, so you only do one sqrt, saves some time, but gives the same result. So maybe that’s another trick.)
Final trick: it takes that result’s real part because sometimes due to float accuracy issues, a tiny imaginary component, like 1e-15, survives.
After you do this, array should contain just real floats, and you can apply the max-bin-finding.
If there’s no Conjugate method, then the old-fashioned way should work:
public float mag2(Complex c) { return c.X * c.X + c.Y * c.Y; }
// in HarmonicProductSpectrum
array[i] = sqrt(mag2(data[i]) * mag2(hps2[i]) * mag2(hps3[i]) * mag2(hps4[i]) * mag2(hps5[i]));
There’s algebraic flaws with the two approaches you suggested in the comments below, but the above should be correct. I’m not sure what C# does when you assign a Complex to a float—maybe it uses the real component? I’d have thought that’d be a compiler error, but with the above code, you’re doing the right thing with the complex data, and only assigning a float to array[i].
To get a pitch estimate, you have to divide your summed bin frequency estimate by the downsampling ratio used for that sum.
Added: You should also sum the magnitudes (abs()), not take the magnitude of the complex sum.
But the harmonic product spectrum algorithm (HPS), especially when using only integer ratios of downsampling, doesn't usually provide better pitch estimation resolution. Instead, it provides a more robust rough pitch estimate (less likely to be fooled by a harmonic) than using a single bare FFT magnitude peak, for overtone-rich timbres that have weak or missing fundamental spectral content.
If you know how to downsample a spectrum by fractional ratios (using interpolation, etc.), you can try finer grained downsampling to get a better pitch estimate out of HPS. Or you can use an HPS result to inform you of a narrower frequency range in which to search using another pitch or frequency estimation method.
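A sketch of the fractional-ratio idea, operating on a real magnitude spectrum with plain linear interpolation (the method name is invented):
// Sketch: resample a magnitude spectrum by a non-integer ratio using linear
// interpolation, so HPS can use ratios finer than 2, 3, 4, ...
static float[] DownsampleFractional(float[] magnitude, double ratio)
{
    int length = (int)(magnitude.Length / ratio);
    float[] result = new float[length];
    for (int i = 0; i < length; i++)
    {
        double pos = i * ratio;        // source position corresponding to output bin i
        int k = (int)pos;
        double frac = pos - k;
        float next = k + 1 < magnitude.Length ? magnitude[k + 1] : magnitude[k];
        result[i] = (float)((1 - frac) * magnitude[k] + frac * next);
    }
    return result;
}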

Directsound logarithm volume to linear volume slider

I am developing a music player with DirectX.DirectSound. I have a problem with the volume. The DirectSound volume is logarithmic, which means that with quiet sounds it is much more sensitive to small variations in amplitude than with loud sounds. It also means that with a linear volume slider you get a logarithmic sensation of volume variation, and that just doesn't feel right. My question is: how can I make it feel linear?
My code until here is:
if (trkBalance.Value == trkBalance.Minimum)
{
    foreGroundSound.Volume = (int)DS.Volume.Min;
}
else if (trkBalance.Value == trkBalance.Maximum)
{
    foreGroundSound.Volume = (int)DS.Volume.Max;
}
else
{
    foreGroundSound.Volume = (int)(-5000 * Math.Log10(100 - trkBalance.Value));
}
There is a rule of thumb to determine the perceived loudness:
A difference of 10 dB (doubleValue) results in a sound twice / half as loud as the original source.
With that in mind we can create a formula that maps the attenuation to the sound pressure level.
But first we have to calculate the actual attenuation (as a fraction). DirectSound can attenuate a sound by 100 dB, which is an attenuation of 1/2^(100/doubleValue). This is the value for the minimum trackbar value. The maximum value is 1 (no change). So overall:
doubleValue = 10;
minimumAttenuation = 1/2^(100/doubleValue)
attenuation = minimumAttenuation + trkBalance.Value / 100 * (1 - minimumAttenuation);
Now we have a value within valid range. Now we need to find the sound pressure level for this attenuation.
And we know that the loudness doubles every 10 dB (doubleValue):
attenuation = 2^(db/doubleValue) //ln
ln(attenuation) = db / doubleValue * ln(2)
db = doubleValue * ln(attenuation) / ln(2)
And since DirectSound takes hundredths of a dB, you can use
foreGroundSound.Volume = db * 100;
Those are just some theoretical thoughts based on Wikipedia information. It might or might not work. Just try it.
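In C#, those steps could be sketched like this, using the question's trkBalance and foreGroundSound (untested against DirectSound itself):
// Sketch: map a linear 0..100 slider to DirectSound's volume (hundredths of dB)
// so that perceived loudness changes roughly linearly with the slider.
const double doubleValue = 10.0;   // +10 dB is perceived as roughly twice as loud
double minimumAttenuation = Math.Pow(2, -100.0 / doubleValue);   // the -100 dB floor

double attenuation = minimumAttenuation
                   + trkBalance.Value / 100.0 * (1.0 - minimumAttenuation);

// Invert attenuation = 2^(db / doubleValue) to get the level in dB (<= 0).
double db = doubleValue * Math.Log(attenuation) / Math.Log(2.0);

foreGroundSound.Volume = (int)(db * 100);   // DirectSound expects hundredths of dB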

Discrete Fourier transform

I am currently trying to write a Fourier transform algorithm. I started with a simple DFT algorithm as described in the mathematical definition:
public class DFT {
    public static Complex[] Transform(Complex[] input) {
        int N = input.Length;
        Complex[] output = new Complex[N];
        double arg = -2.0 * Math.PI / (double)N;
        for (int n = 0; n < N; n++) {
            output[n] = new Complex();
            for (int k = 0; k < N; k++)
                output[n] += input[k] * Complex.Polar(1, arg * (double)n * (double)k);
        }
        return output;
    }
}
So I tested this algorithm with the following code:
private int samplingFrequency = 120;
private int numberValues = 240;
private void doCalc(object sender, EventArgs e) {
    Complex[] input = new Complex[numberValues];
    Complex[] output = new Complex[numberValues];
    double t = 0;
    double y = 0;
    for (int i = 0; i < numberValues; i++) {
        t = (double)i / (double)samplingFrequency;
        y = Math.Sin(2 * Math.PI * t);
        input[i] = new Complex(y, 0);
    }
    output = DFT.Transform(input);
    printFunc(input);
    printAbs(output);
}
The transformation works fine, but only if numberValues is a multiple of samplingFrequency (in this case: 120, 240, 360, ...). That's my result for 240 values:
The transformation just worked fine.
If I try to calculate 280 values I get this result:
Why am I getting an incorrect result when I change the number of calculated values?
I am not sure whether my problem here is a problem with my code or a misunderstanding of the mathematical definition of the DFT. Either way, can anybody help me with my problem? Thanks.
What you are experiencing is called Spectral Leakage.
This is caused because the underlying mathematics of the Fourier transform assumes a continuous function from -infinity to +infinity, so the range of samples you provide is effectively repeated an infinite number of times. If you don't have a complete number of cycles of the waveform in the window, the ends won't line up and you will get a discontinuity, which manifests itself as the frequency smearing out to either side.
The normal way to handle this is called Windowing: you multiply the whole window of samples you are going to process by some function that tends towards 0 at both ends of the window, causing the ends to line up. However, this does come with a downside, as it lowers the total signal power and therefore causes the amplitudes to be slightly off.
So to summarise, there is no error in your code, and the result is as expected. The artefacts can be reduced using a window function; however, this will affect the accuracy of the amplitudes. You will need to investigate and determine what solution best fits the requirements of your project.
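For example, applying a Hamming window to the question's input before the transform would look something like this; as described above, the peak amplitude will come out a bit lower:
// Sketch: Hamming-window the generated samples before calling DFT.Transform.
// The window tapers both ends towards zero so the repeated signal has no jump.
for (int i = 0; i < numberValues; i++)
{
    double t = (double)i / samplingFrequency;
    double y = Math.Sin(2 * Math.PI * t);
    double w = 0.54 - 0.46 * Math.Cos(2 * Math.PI * i / (numberValues - 1));
    input[i] = new Complex(y * w, 0);
}
output = DFT.Transform(input);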
You are NOT getting an incorrect result for a non-periodic sinusoid, and the extra values are not just "artifacts". Your result is actually the more complete DFT result, which you don't see with a periodic sinusoid. Those other non-zero values contain useful information which can be used to, for example, interpolate the frequency of a single sinusoid that is not periodic in the aperture.
A DFT can be thought of as convolving a rectangular window with your sine wave. This produces (something very close to) a Sinc function, which has infinite extent, BUT just happens to be zero at every DFT bin frequency other than its central DFT bin for any sinusoid centered exactly on a DFT bin. That happens only when the frequency is exactly periodic in the FFT aperture, not for any other frequency. The Sinc function has lots of "humps" which are all hidden in your first plot.

Generating a Sine Sweep in C#

I want to generate a sine sweep in C# where I am able to define the start frequency, end frequency, and the duration of the sweep. I've looked at sound libraries such as DirectSound and ASIO that play a buffer. But I haven't been able to figure out how to control the duration of the sweep when the duration of the sweep is long enough to fill more than one buffer due to the buffer size limitation. Any samples or guides would be extremely helpful.
If you are satisfied with a ready-made program rather than writing it yourself, take a look at The Audio Test File Generator.
This small Windows EXE is able to generate a linear sine sweep with a given start and end frequency.
If you want to write it on your own, you have to fill the buffer using:
sin(2*pi * f * n/sample_rate)
where
f is the current sine frequency (you want to sweep) in Hz
n is the sample index of the buffer
sample_rate is the sample rate in Hz
An example with f=10Hz.
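As a C# sketch, filling one buffer at a fixed f with that formula looks like this; for a sweep, f changes gradually from sample to sample, which the answer below handles by accumulating the cycle position instead of using n directly:
// Sketch: fill a buffer with a 10 Hz sine using sin(2*pi * f * n/sample_rate).
int sampleRate = 44100;
double f = 10.0;
float[] buffer = new float[sampleRate];   // one second of audio

for (int n = 0; n < buffer.Length; n++)
    buffer[n] = (float)Math.Sin(2 * Math.PI * f * n / sampleRate);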
ulrichb has already stated all the necessary information, but recently I had to build a sine sweep generator in .NET with C#.
It looked cool to me, so I'll leave the code here; maybe it will be useful for others.
numberOfSamples: Buffer size.
sweepDuration: Time takes to go from low frequency to high frequency.
lowFreq: Start frequency
highFreq: End Frequency
deltaTime: 1 / sampling rate (time taken to take 1 sample)
float sweepCurrentTime = 0.0f;
float sweepFrequencyFactor = 0.0f;
float sweepCurrentCyclePosition = 0.0f;
float sweepFrequency = 0.0f;

public void generateSineSweep(float[] buffer, int numberOfSamples, int sampleRate, int sweepDuration, float lowFreq, float highFreq)
{
    float deltaTime = 1.0f / sampleRate;
    for (int i = 0; i < numberOfSamples; i++)
    {
        sweepFrequency = lowFreq + ((highFreq - lowFreq) * sweepFrequencyFactor);
        sweepCurrentCyclePosition += sweepFrequency / sampleRate;
        buffer[i] = Convert.ToSingle(0.25f * Math.Sin(sweepCurrentCyclePosition * 2 * Math.PI));
        if (sweepCurrentTime > sweepDuration)
        {
            sweepCurrentTime -= sweepDuration;
            sweepCurrentTime += deltaTime;
            sweepFrequencyFactor = 0.0f;
        }
        else
        {
            sweepCurrentTime += deltaTime;
            sweepFrequencyFactor = sweepCurrentTime / sweepDuration;
        }
    }
}
The function progresses from the low frequency to the high frequency, increasing the frequency by a small amount after each sample.
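Because the sweep state lives in fields outside the method, you can call it repeatedly for consecutive buffers and the sweep simply continues across buffer boundaries, which addresses the original buffer-size concern. A usage sketch (the block size and frequencies are arbitrary):
// Sketch: generate a 5-second sweep from 100 Hz to 8 kHz in 4096-sample blocks.
int sampleRate = 44100;
float[] block = new float[4096];
int totalSamples = 5 * sampleRate;

for (int written = 0; written < totalSamples; written += block.Length)
{
    generateSineSweep(block, block.Length, sampleRate, 5, 100.0f, 8000.0f);
    // ...hand 'block' to the audio API (NAudio, DirectSound, ASIO, ...)...
}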

How to simulate a harmonic oscillator driven by a given signal (not driven by sine wave)

I've got a table of values telling me how the signal level changes over time and I want to simulate a harmonic oscillator driven by this signal. It does not matter if the simulation is not 100% accurate.
I know the frequency of the oscillator.
I found lots of formulas but they all use a sine wave as driver.
I guess you want to perform some time-discrete simulation. The well-known formulae require analytic input (see Green's function). If you have a table of forces at discrete points in time, the typical analytical formulae won't help you too much.
The idea is this: at each point in time t0, the oscillator has some given acceleration, velocity, etc. Now a force acts on it -according to the table you were given- which will change its acceleration (F = m * a). For the next time step t1, we assume the acceleration stays constant, so we can apply the simple Newtonian equations (v = a * dt) with dt = (t1 - t0) for this time frame. Iterate until the desired range in time is simulated.
The most important parameter of this simulation is dt, that is, how fine-grained the calculation is. For example, you might want to have 10 steps per second, but that completely depends on your input parameters. What we're doing here, in essence, is an Euler integration of the equations of motion.
This, of course, isn't all there is - such simulations can be quite complicated, especially in not-so-well-behaved cases with extreme accelerations and the like. In those cases you need to perform numerical sanity checks within a frame, because something 'extreme' happens in a single frame. More sophisticated numerical integration may also become necessary, e.g. the Runge-Kutta algorithm. I guess that leads too far at this point, however.
EDIT: Just after I posted this, somebody posted a comment to the original question pointing to the "Verlet Algorithm", which is basically an implementation of what I described above.
http://en.wikipedia.org/wiki/Simple_harmonic_motion
http://en.wikipedia.org/wiki/Hooke's_Law
http://en.wikipedia.org/wiki/Euler_method
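A minimal C# sketch of that time-stepping loop (the constants and the LoadForceTable helper are invented for illustration; the force table is assumed to hold one value per dt):
// Sketch: Euler integration of a driven, lightly damped harmonic oscillator,
// x'' = (F(t) - k*x - c*v) / m, stepped forward with a fixed dt.
double m = 1.0;        // mass
double k = 40.0;       // spring constant; natural angular frequency is sqrt(k/m)
double c = 0.1;        // damping coefficient
double x = 0.0;        // position
double v = 0.0;        // velocity

double dt = 0.001;                 // simulation step in seconds
double[] force = LoadForceTable(); // the given driving signal (hypothetical helper)

for (int i = 0; i < force.Length; i++)
{
    double a = (force[i] - k * x - c * v) / m;  // F = m*a rearranged for a
    v += a * dt;                                // assume a is constant over dt
    x += v * dt;
    // x is the oscillator's response at time i * dt
}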
OK, I finally figured it out and wrote a GUI app to test it until it worked. But my PC is not very happy with doing it 1000*44100 times per second, even without the GUI ^^
Anyway, here is my test code (which worked quite well):
double lastTime;
const double deltaT = 1 / 44100.0; //length of a frame in seconds
double rFreq;

private void InitPendulum()
{
    double freq = 2; //frequency in hertz
    rFreq = FToRSpeed(freq);
    damp = Math.Pow(0.8, freq * deltaT);
}

private static double FToRSpeed(double p)
{
    p *= 2;
    p = Math.PI * p;
    return p * p;
}

double damp;
double bHeight;
double bSpeed;
double lastchange;

private void timer1_Tick(object sender, EventArgs e)
{
    double now = sw.ElapsedTicks / (double)Stopwatch.Frequency;
    while (lastTime + deltaT <= now)
    {
        bHeight += bSpeed * deltaT;
        double prevSpeed = bSpeed;
        bSpeed += (mouseY - bHeight) * (rFreq * deltaT);
        bSpeed *= damp;
        if ((bSpeed > 0) != (prevSpeed > 0))
        {
            Console.WriteLine(lastTime - lastchange);
            lastchange = lastTime;
        }
        lastTime += deltaT;
    }
    Invalidate(); //No, I am not using GDI ^^
}
