How to get peak power of sound in Unity C#

After recording a sound in Unity, is it possible to get the peak power of the sound? Or is there any way to calculate the peak power of a sound?

Peak level isn't very interesting on its own. If you want something closer to the perceived volume of the sound, a pretty good metric is RMS. To get it, you only need a bit of math:
Load the sample data using audio.GetOutputData (i.e. AudioSource.GetOutputData).
Sum the squares of all the sampled values.
Take the square root of (sum / amountOfSamples) - that's the RMS (root mean square).
If you want a value in dB, compute it as 20 * log10(rms / reference), where reference is the value you want to map to 0 dB; 0.1 is a reasonable choice. Note that the RMS value will always be between 0 and 1, while dB values range more widely - they approximate human hearing better, though. If you want to be really serious, different frequencies are perceived at different loudness - have a look at dBA weighting, for example.
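For reference, here is a minimal C# sketch of those steps for Unity. The 1024-sample window and the 0.1 reference are just example values, and the script assumes an AudioSource sits on the same GameObject:

using UnityEngine;

public class LevelMeter : MonoBehaviour
{
    const int SampleCount = 1024;      // assumed analysis window
    const float Reference = 0.1f;      // value mapped to 0 dB
    readonly float[] samples = new float[SampleCount];

    void Update()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.GetOutputData(samples, 0);                  // channel 0 of the current output

        float sumOfSquares = 0f;
        for (int i = 0; i < SampleCount; i++)
            sumOfSquares += samples[i] * samples[i];

        float rms = Mathf.Sqrt(sumOfSquares / SampleCount);
        // Clamp away from zero so a silent frame doesn't produce -Infinity.
        float db = 20f * Mathf.Log10(Mathf.Max(rms, 1e-7f) / Reference);

        Debug.Log("RMS: " + rms.ToString("F4") + "  Level: " + db.ToString("F1") + " dB");
    }
}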

Related

sound analyzer using naudio for 48000 samples/sec sound. Can I use a cycle-sample-size of 1024?

I need to create a sound analyzer to isolate certain song frequencies. For now, I'm interested in bass (60-250Hz).
I read the signal (IEEE float) and, for each block of 1024 samples, do an FFT and then extract the value corresponding to each frequency.
What I don't understand is this: I know the FFT needs a power-of-2 block size in order to work. I've seen code using blocks of 512, code using 2048, 4096 and so on.
I've settled on 1024 (which gives me roughly 47 datapoints/second). Am I correct in assuming that using 2048, for instance, will work just the same, giving me 23.5 datapoints/second, and that the only difference is accuracy (and speed of computation, of course)?
Also, am I required to read at 1024-boundary blocks? Like, for instance, say I simply skip the first 200 floats, will the results end up being very similar? (my tests seem to say yes)
LATER EDIT: updated title to make it easier to understand
1024 samples at 48 kHz is barely longer than one period of a 60 Hz signal - too short to determine whether the signal is even fully periodic (repeats). Humans typically require somewhere around 6 periods of repetition to hear a sound as having a definite pitch.
60 Hz is close to B1. You might need about 2 Hz of resolution to separate B1 from C2 with a clear gap between the two nearest FFT frequency bins. To do that using only FFT magnitude results would require an FFT covering 48 kHz / 2 Hz = 24000 samples, i.e. half a second or longer. The nearest power of 2 at 48 ksps is 32768 samples.
For musical pitch, there are much better pitch detectors/estimators than a bare FFT or FFT frequency peak magnitude, as they solve the missing or weak fundamental issue common in recorded instrumental or vocal music. Those pitch estimators can work with shorter analysis windows than half a second, but require more computation than bare FFT magnitude peak picking.
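To sanity-check those numbers, here is a small C# sketch (the sample rate and target resolution are just the values from this discussion):

using System;

class FftResolution
{
    static void Main()
    {
        double sampleRate = 48000.0;                         // samples per second
        foreach (int fftSize in new[] { 1024, 2048, 32768 })
        {
            double binWidthHz = sampleRate / fftSize;        // frequency resolution per bin
            double windowMs = 1000.0 * fftSize / sampleRate; // time covered by one block
            Console.WriteLine($"N={fftSize}: {binWidthHz:F2} Hz per bin, {windowMs:F1} ms window");
        }
        // Separating two tones ~2 Hz apart from magnitude alone needs binWidth <= 2 Hz,
        // i.e. fftSize >= 48000 / 2 = 24000, so the next power of two is 32768.
    }
}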

Solving for Amplitude and Frequency in WAV files

I've also asked this on the Sound Design forum, but the question is heavy on computer science/math, so it might actually belong on this forum:
By reading the binary data in the file, I'm able to successfully find all the information about a WAV file except the amplitude and the frequency (hertz) of the big sine function - which are, unfortunately, exactly what I'm looking for. Just to verify what I'm talking about, the file generates only one wave, with the equation:
F(s) = A * sin(T * s)
Where s is the current sample, A is the amplitude and T is the period. Now the equation for the T (period) is:
T = (2π * Hz) /(α * ω)
Where Hz is frequency in Hertz, α is Samples per second, and ω is the amount of channels.
Now I know that to solve for amplitude, I could simply find the value of F(s) where
s = (π/2)/T
Because then the value of the sine function would be 1, and the final value would be equivalent to A. The problem is that to divide by T, I have to know the Hertz (or Hz).
Is there any way that I can read a WAV file to discover the Hertz from the data, assuming the file contains only a single wave?
Just to get some terms clarified: the property you're looking for is frequency, and the unit of frequency is Hertz (cycles per second). By convention, the standard A note has a frequency of 440 Hz.
You got the function wrong, actually. That sine wave in reality has the form F(s) = A * sin(2*pi*s/T + c) - you don't know when it started, so you get a constant c in there. Also, you need to divide by T, not multiply.
Getting the amplitude is actually fairly easy. That sine wave has a series of peaks and valleys. Find each peak (higher than both neighbors) and each valley (lower than both neighbors), calculate the average peak and average valley, and the amplitude is HALF the difference between the two. Pretty easy. The period T can be estimated by counting the average distance from peak to peak, and from valley to valley.
There's one bit where you need to be careful. If there is a slight bit of noise, you may get a slight dent near a peak. Instead of 14 17 18 17 14 you may get 14 17 16 17 14. That 16 isn't a real valley. Once you've got a good estimate for the real peaks and valleys, throw out all the distorted ones.
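To make that concrete, here is a hedged C# sketch of the peak/valley approach. It assumes the WAV data has already been decoded into a mono float array, and it does not yet include the de-noising step described above:

using System.Collections.Generic;
using System.Linq;

static class SineEstimator
{
    // Estimate amplitude and period (in samples) of a roughly sinusoidal signal.
    // A real implementation should guard against too few peaks and noise dents.
    public static (double amplitude, double periodInSamples) Estimate(float[] samples)
    {
        var peakPositions = new List<int>();
        var peaks = new List<float>();
        var valleys = new List<float>();

        for (int i = 1; i < samples.Length - 1; i++)
        {
            if (samples[i] > samples[i - 1] && samples[i] > samples[i + 1])
            {
                peaks.Add(samples[i]);
                peakPositions.Add(i);
            }
            else if (samples[i] < samples[i - 1] && samples[i] < samples[i + 1])
            {
                valleys.Add(samples[i]);
            }
        }

        // Amplitude: half the span between the average peak and the average valley.
        double amplitude = (peaks.Average() - valleys.Average()) / 2.0;

        // Period: average distance between consecutive peaks.
        double period = (peakPositions[peakPositions.Count - 1] - peakPositions[0])
                        / (double)(peakPositions.Count - 1);

        return (amplitude, period);
    }
}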
The question isn't "what frequency?". If your function is anything other than a simple trig function it'll be a combination of frequencies, each with their own amplitude.
The correct approach is digital signal processing using finite Fourier transform. You've got a lot of digging to do.
If you only want to assume a single trig function, you have just 2 (amplitude and frequency) or 3 degrees of freedom (amplitude, frequency, and phase angle) and N time points in the file. That means least squares fitting assuming a sine or cosine function.
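As a rough illustration of that least squares idea (my sketch, not code from either answer): for a guessed angular frequency w, the best-fitting a*sin(w*i) + b*cos(w*i) comes from solving 2x2 normal equations, and the amplitude is sqrt(a^2 + b^2). The frequency itself can then be found by scanning candidate values of w for the smallest residual.

using System;

static class SineFit
{
    // Fit y[i] ~= a*sin(w*i) + b*cos(w*i) for a guessed angular frequency w
    // (w = 2*pi*frequencyHz / sampleRate). Returns the fitted amplitude.
    public static double FitAmplitude(double[] y, double w)
    {
        double ss = 0, cc = 0, sc = 0, ys = 0, yc = 0;
        for (int i = 0; i < y.Length; i++)
        {
            double s = Math.Sin(w * i), c = Math.Cos(w * i);
            ss += s * s; cc += c * c; sc += s * c;
            ys += y[i] * s; yc += y[i] * c;
        }
        // Normal equations: [ss sc; sc cc] * [a; b] = [ys; yc], solved by Cramer's rule.
        double det = ss * cc - sc * sc;          // near zero only for degenerate w
        double a = (ys * cc - yc * sc) / det;
        double b = (yc * ss - ys * sc) / det;
        return Math.Sqrt(a * a + b * b);
    }
}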

given 10 functions y=a+bx and 1000's of (x,y) data points rounded to ints, how to derive 10 best (a,b) tuples?

We build software that audits fees charged by banks to merchants that accept credit and debit cards. Our customers want us to tell them if the card processor is overcharging them. Per-transaction credit card fees are calculated like this:
fee = fixed + variable*transaction_price
A "fee scheme" is the pair of (fixed, variable) used by a group of credit cards, e.g. "MasterCard business debit gold cards issued by First National Bank of Hollywood". We believe there are fewer than 10 different fee schemes in use at any time, but we aren't getting a complete nor current list of fee schemes from our partners. (yes, I know that some "fee schemes" are more complicated than the equation above because of caps and other gotchas, but our transactions are known to have only a + bx schemes in use).
Here's the problem we're trying to solve: we want to use per-transaction data about fees to derive the fee schemes in use. Then we can compare that list to the fee schemes that each customer should be using according to their bank.
The data we get about each transaction is a data tuple: (card_id, transaction_price, fee).
transaction_price and fee are in integer cents. The bank rolls over fractional cents for each transaction until the cumulative amount exceeds one cent, and then a "rounding cent" is attached to the fee of that transaction. We cannot predict which transaction the "rounding cent" will be attached to.
card_id identifies a group of cards that share the same fee scheme. In a typical day of 10,000 transactions, there may be several hundred unique card_id's. Multiple card_id's will share a fee scheme.
The data we get looks like this, and what we want to figure out is the last two columns.
card_id   transaction_price   fee   fixed   variable
=====================================================
12345     200                 22      ?        ?
67890     300                 21      ?        ?
56789     150                  8      ?        ?
34567     150                  8      ?        ?
34567     150                  9      ?        ?    <- "rounding cent"
34567     150                  8      ?        ?
The end result we want is a short list, with 10 or fewer entries, showing the fee schemes that best fit our data. Like this:
fee_scheme_id   fixed   variable
=================================
1               22      0
2               21      0
3                ?      ?
4                ?      ?
...
The average fee is about 8 cents. This means the rounding cents have a huge impact and the derivation above requires a lot of data.
The average transaction is 125 cents. Transaction prices are always on 5-cent boundaries.
We want a short list of fee schemes that "fit" 98%+ of the 3,000+ transactions each customer gets each day. If that's not enough data to achieve 98% confidence, we can use multiple days' of data.
Because of the rounding cents applied somewhat arbitrarily to each transaction, this isn't a simple algebra problem. Instead, it's a kind of statistical clustering exercise that I'm not sure how to solve.
Any suggestions for how to approach this problem? The implementation can be in C# or T-SQL, whichever makes the most sense given the algorithm.
Hough transform
Consider your problem in image terms: If you would plot your input data on a diagram of price vs. fee, each scheme's entries would form a straight line (with rounding cents being noise). Consider the density map of your plot as an image, and the task is reduced to finding straight lines in an image. Which is just the job of the Hough transform.
You would essentially approach this by plotting one line for each transaction into a diagram of possible fixed fee versus possible variable fee, adding the values of lines where they cross. At the points of real fee schemes, many lines will intersect and form a large local maximum. By detecting this maximum, you find your fee scheme, and even a degree of importance for the fee scheme.
This approach will surely work, but might take some time depending on the resolution you want to achieve. If computation time proves to be an issue, remember that a Voronoi diagram of a coarse Hough space can be used as a classifier - and once you have classified your points into fee schemes, simple linear regression solves your problem.
Considering that a processing query's storage requirements are of the same order of magnitude as a day's worth of transaction data, I assume that such storage is not a problem, so:
First pass: Group the transactions for each card_id by transaction_price, keeping card_id, transaction_price and the average fee. This can easily be done in SQL. This assumes there are no outliers - but you can catch those after this stage if required. The resulting number of rows is guaranteed to be no higher than the number of raw data points.
Second pass: Per group, walk these new data points (with a cursor or in C#) and calculate the average value of b. Again, any outliers can be caught after this stage if desired.
Third pass: Per group, calculate the average value of a, now that b is known. This is basic SQL. Outliers as always.
If you decide to do the second step in a cursor you can stuff all that into a stored procedure.
Different card_id groups that use the same fee scheme can now be coalesced (sorry if that's the wrong word - not a native English speaker) into fee schemes by rounding a and b to a sane precision and grouping again.
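A hedged C# sketch of those three passes (the Txn record and the pairwise-slope estimate for b are my own illustration of the idea, not code from the answer; in practice the first pass would be a SQL GROUP BY):

using System.Collections.Generic;
using System.Linq;

record Txn(int CardId, int Price, int Fee);   // price and fee in cents

static class FeeSchemes
{
    // Estimate (a, b) per card_id, assuming each card_id uses exactly one scheme.
    public static Dictionary<int, (double a, double b)> Estimate(IEnumerable<Txn> txns)
    {
        var result = new Dictionary<int, (double a, double b)>();
        foreach (var cardGroup in txns.GroupBy(t => t.CardId))
        {
            // Pass 1: average fee per distinct price (averaging washes out rounding cents).
            var points = cardGroup
                .GroupBy(t => t.Price)
                .Select(g => (price: (double)g.Key, fee: g.Average(t => (double)t.Fee)))
                .OrderBy(p => p.price)
                .ToList();

            // Pass 2: average slope b between consecutive price points.
            double b = 0;
            if (points.Count > 1)
            {
                var slopes = new List<double>();
                for (int i = 1; i < points.Count; i++)
                    slopes.Add((points[i].fee - points[i - 1].fee) /
                               (points[i].price - points[i - 1].price));
                b = slopes.Average();
            }

            // Pass 3: with b fixed, a is the average intercept fee - b*price.
            double a = points.Average(p => p.fee - b * p.price);
            result[cardGroup.Key] = (a, b);
        }
        return result;
    }
}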
The Hough transform is the most general answer, though I don't know how one would implement it in SQL (rather than pulling the data out and processing it in a general purpose language of your choice).
Alas, the naive version is known to be slow if you have a lot of input data (1000 points is kinda medium sized) and if you want high precision results (scales as size_of_the_input / (rho_precision * theta_precision)).
There is a faster approach based on 2^n-trees, but there are few implementations out on the web to just plug in. (I recently did one in C++ as a testbed for a project I'm involved in. Maybe I'll clean it up and post it somewhere.)
If there is some additional order to the data you may be able to do better (i.e. do the line segments form a piecewise function?).
Naive Hough transform
Define an accumulator in (theta, rho) space spanning [-pi, pi) x [0, max(hypotenuse(x, y))] as a 2D array.
For each point (x, y) in the input data
    For each bin in theta
        find the distance rho of the altitude from the origin to
        a line through (x, y) making angle theta with the horizontal:
            rho = x*cos(theta) + y*sin(theta)
        and increment the bin (theta, rho) in the accumulator
Find the maximum bin in the accumulator; this
represents the most line-like structure in the data.
To convert back to y = a + b*x: if (theta != 0) { a = rho/sin(theta); b = -1/tan(theta); }
Reliably getting multiple lines out of a single pass takes a little more bookkeeping, but it is not significantly harder.
You can improve the result a little by smoothing the data near the candidate peaks and fitting for sub-bin precision, which should be faster than using smaller bins and should pick up the effect of the "rounding" cents fairly smoothly.
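For completeness, here is a rough C# sketch of the same voting idea, but in slope-intercept form rather than the (theta, rho) parametrization above; the bin counts and rate step are chosen purely for illustration:

using System;
using System.Collections.Generic;

static class HoughFeeFit
{
    // Each transaction votes for every (fixed, variable) pair consistent with it;
    // the strongest accumulator peak is the most common fee scheme.
    public static (int fixedCents, double variableRate) Fit(IEnumerable<(double price, double fee)> txns)
    {
        const int FixedBins = 100;        // candidate fixed fees: 0..99 cents
        const int RateBins = 500;         // candidate variable rates: 0..0.05
        const double RateStep = 0.0001;
        var accumulator = new int[FixedBins, RateBins];

        foreach (var (price, fee) in txns)
            for (int r = 0; r < RateBins; r++)
            {
                // For this candidate rate, the implied fixed fee is fee - rate*price.
                int f = (int)Math.Round(fee - r * RateStep * price);
                if (f >= 0 && f < FixedBins)
                    accumulator[f, r]++;
            }

        int bestF = 0, bestR = 0;
        for (int f = 0; f < FixedBins; f++)
            for (int r = 0; r < RateBins; r++)
                if (accumulator[f, r] > accumulator[bestF, bestR]) { bestF = f; bestR = r; }

        return (bestF, bestR * RateStep);
    }
}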
You're looking at the rounding cent as a significant source of noise in your calculations, so I'd focus on minimizing the noise due to that issue. The easiest way to do this IMO is to increase the sample size.
Instead of viewing your data as thousands of instances of y = mx + b (+ rounding), group your data into larger subsets:
If you combine X transactions that share the same card_id and look at this as (sum of X fees) = (variable rate)*(sum of X transaction prices) + X*(base rate) (+ rounding), the rounding noise will largely fall by the wayside.
Get enough groups of size 'X' and you should be able to come up with a pretty close representation of the real numbers.

Bell Curve Gaussian Algorithm (Python and/or C#)

Here's a somewhat simplified example of what I am trying to do.
Suppose I have a formula that computes credit points, but the formula has no constraints (for example, the score might be 1 to 5000). And a score is assigned to 100 people.
Now, I want to assign a "normalized" score between 200 and 800 to each person, based on a bell curve. So for example, if one guy has 5000 points, he might get an 800 on the new scale. The people with the middle of my point range will get a score near 500. In other words, 500 is the median?
A similar example might be the old scenario of "grading on the curve", where a the bulk of the students perhaps get a C or C+.
I'm not asking for the code, either a library, an algorithm book or a website to refer to.... I'll probably be writing this in Python (but C# is of some interest as well). There is NO need to graph the bell curve. My data will probably be in a database and I may have even a million people to which to assign this score, so scalability is an issue.
Thanks.
The important property of the bell curve is that it describes the normal distribution, which is a simple model for many natural phenomena. I am not sure what kind of "normalization" you intend to do, but it seems to me that your current score already follows a normal distribution; you just need to determine its parameters (mean and variance) and scale each result accordingly.
References:
https://en.wikipedia.org/wiki/Grading_on_a_curve
https://en.wikipedia.org/wiki/Percentile
(see also: gaussian function)
I think the approach that I would try would be to compute the mean (average) and standard deviation (average distance from the average). I would then choose parameters to fit to my target range. Specifically, I would choose that the mean of the input values map to the value 500, and I would choose that 6 standard deviations consume 99.7% of my target range. Or, a single standard deviation will occupy about 16.6% of my target range.
Since your target range is 600 (from 200 to 800), a single standard deviation would cover 99.7 units. So a person who obtains an input credit score that is one standard deviation above the input mean would get a normalized credit score of 599.7.
So now:
# mean and standard deviation (stddev) of the input values have been computed
for score in input_scores:
    distance_from_mean = score - mean
    distance_from_mean_in_standard_deviations = distance_from_mean / stddev
    target = 500 + distance_from_mean_in_standard_deviations * 99.7
    if target < 200:
        target = 200
    if target > 800:
        target = 800
This won't necessarily map the median of your input scores to 500. This approach assumes that your input is more-or-less normally distributed and simply translates the mean and stretches the input bell curve to fit in your range. For inputs that are significantly not bell curve shaped, this may distort the input curve rather badly.
A second approach is to simply map your input range to our output range:
for score in input_scores:
    value = (score - 1.0) / (5000 - 1)
    target = value * (800 - 200) + 200
This will preserve the shape of your input, but in your new range.
A third approach is to have your target range represent percentiles instead of trying to represent a normal distribution. 1% of people would score between 200 and 205; 1% would score between 794 and 800. Here you would rank your input scores and convert the ranks into values in the range 200..800. This makes full use of your target range and gives it an easy-to-understand interpretation.
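A small C# sketch of that percentile mapping (the method name and the linear rank-to-range mapping are just my illustration):

using System.Linq;

static class PercentileMapper
{
    // Map each raw score to 200..800 according to its rank among all scores.
    public static double[] Map(double[] rawScores)
    {
        int n = rawScores.Length;
        int[] order = Enumerable.Range(0, n).OrderBy(i => rawScores[i]).ToArray();
        var normalized = new double[n];
        for (int rank = 0; rank < n; rank++)
        {
            double percentile = n == 1 ? 0.5 : rank / (double)(n - 1);   // 0..1
            normalized[order[rank]] = 200 + percentile * 600;
        }
        return normalized;   // ties receive adjacent (not identical) values in this simple version
    }
}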

How can I find a peak in an array?

I am making a pitch detection program using an FFT. To get the pitch I need to find the lowest frequency that is significantly above the noise floor.
All the results are in an array; each position corresponds to a frequency. I don't have any idea how to find the peak.
I am programming in C#.
Here is a screenshot of the frequency analysis in Audacity.
Instead of attempting to find the lowest peak, I would look for the fundamental frequency which maximizes the spectral energy captured by its first 5 integer multiples. Note that every peak is an integer multiple of the lowest peak. This is a hack of the cepstrum method. Don't judge :).
N.B. From your plots, I assume a 1024-sample window and a 44.1 kHz sampling rate. This yields a frequency granularity of only 44100 Hz / 1024 = ~43 Hz. For 44.1 kHz audio, I recommend using a longer analysis window of ~50 ms, or 2048 samples. This would yield a finer frequency granularity of ~21 Hz.
Assuming a Matlab vector 'psd' of size 2048 holding the PSD values:
% 50 Hz (Dude)  -> 50Hz/44100Hz * 2048  -> ~2  (lower limit bin)
% 300 Hz (Baby) -> 300Hz/44100Hz * 2048 -> ~14 (upper limit bin)
lower_lim = 2;
upper_lim = 14;
sum_energy = zeros(1, upper_lim);
for fund_cand = lower_lim:upper_lim
    i_first_five_multiples = (1:5) * fund_cand;
    sum_energy(fund_cand) = sum(psd(i_first_five_multiples));
end
I would then pick the fundamental candidate that maximizes sum_energy.
It would be easier if you had some notion of the absolute values to expect, but I would suggest:
Find the lowest (weakest) value first - that is your noise level.
Compute the average level - that is roughly your signal strength.
Define some function to decide the noise threshold. This is the tricky part; it may require some experimentation.
In a bad situation, the signal may be only 2 or 3 times the noise level. If the signal is better, you can probably use a threshold of 2x the noise level.
Edit, after looking at the picture:
You should probably just start at the left and find the first local maximum. It looks like you could use a 30 dB threshold and a 10-bin window, or something along those lines.
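A hedged C# sketch of that scan; the threshold and window size are just the numbers suggested above and would need tuning against real spectra (the threshold is in whatever dB scale your array uses, e.g. dB above the measured noise floor):

static class PeakScan
{
    // Return the index of the first (lowest-frequency) bin that is a local maximum
    // over +/- window bins and exceeds the threshold, or -1 if none is found.
    public static int FindLowestPeak(double[] magnitudeDb, double thresholdDb = 30.0, int window = 10)
    {
        for (int i = window; i < magnitudeDb.Length - window; i++)
        {
            bool isLocalMax = true;
            for (int j = i - window; j <= i + window && isLocalMax; j++)
                if (j != i && magnitudeDb[j] > magnitudeDb[i])
                    isLocalMax = false;

            if (isLocalMax && magnitudeDb[i] >= thresholdDb)
                return i;
        }
        return -1;
    }
}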
Finding the lowest peak won't work reliably for estimating pitch, as this frequency is sometimes completely missing, or down in the noise floor. For better reliability, try another algorithm: autocorrelation (AMDF, ASDF lag), cepstrum (FFT of the log FFT), harmonic product spectrum, state space density, and variations thereof that use neural nets, genetic algorithms or decision matrices to decide between alternative pitch hypotheses (RAPT, YAAPT, et al.).
Added:
That said, you could guess a frequency, compute the average and standard deviation of spectral magnitudes for, say, a 2-to-1 frequency range around your guess, and see if there exists a peak significantly above the average (2 sigma?). Rinse and repeat for some number of frequency guesses, and see which one, or the lowest of several, has a peak that stands out the most from the average. Use that peak.
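A small C# sketch of that test (the 2-to-1 band and the 2-sigma cutoff come from the description above; everything else is an assumption):

using System;
using System.Linq;

static class PeakTest
{
    // Does the strongest bin in a 2-to-1 band around guessBin stand out at least
    // two standard deviations above the band's average magnitude?
    public static bool PeakStandsOut(double[] magnitude, int guessBin, out double prominence)
    {
        int lo = Math.Max(0, guessBin / 2);
        int hi = Math.Min(magnitude.Length - 1, guessBin * 2);
        double[] band = magnitude.Skip(lo).Take(hi - lo + 1).ToArray();

        double mean = band.Average();
        double sigma = Math.Sqrt(band.Select(v => (v - mean) * (v - mean)).Average());
        double peak = band.Max();

        prominence = sigma > 0 ? (peak - mean) / sigma : 0;   // measured in standard deviations
        return prominence >= 2.0;
    }
}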
