Get frequency from guitar input in real time - c#

I'm trying to get input from a plugged-in guitar, get the frequency from it, and check whether the user is playing the right note or not. Something like a guitar tuner (I'll need to build a guitar tuner as well).
My first question is, how can I get the frequency of guitar input in real time?
and is it possible to do something like :
if (frequency == noteCFrequency)
{
//print This is a C note!!
}
I'm already able to get input from the sound card, and to record and play back the input sound.

For an implementation of FFT in C# you can have a look at this.
While I think that you do not need to fully understand the FFT to use it, you should know about some basic limitations:
You always need a sample window. You may have a sliding window but the essence of being fast here is to take a chunk of signal and accept some error.
You have "buckets" of frequencies not exact ones. The result is something like "In the range 420Hz - 440Hz you have 30% of the signal". (The "width" of the buckets should be adjustable)
The window size must contain a number of samples that is a power of 2.
The window size must be at least two wavelengths of the longest wavelength you want to detect.
The highest detectable frequency is half the sampling rate (the Nyquist limit). (You don't need to worry about this so much.)
The more precisely you want your frequencies separated, the longer your window must be.
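To make the "buckets" idea concrete, here is a minimal sketch (class and method names are my own, not from any library) of a radix-2 FFT plus a helper that reports the strongest bucket. The window length must be a power of 2, as noted above.

```csharp
using System;

static class FftSketch
{
    // In-place iterative Cooley-Tukey FFT; length must be a power of 2.
    public static void Fft(double[] re, double[] im)
    {
        int n = re.Length;
        // Bit-reversal permutation.
        for (int i = 1, j = 0; i < n; i++)
        {
            int bit = n >> 1;
            for (; (j & bit) != 0; bit >>= 1) j ^= bit;
            j |= bit;
            if (i < j)
            {
                (re[i], re[j]) = (re[j], re[i]);
                (im[i], im[j]) = (im[j], im[i]);
            }
        }
        // Butterfly passes.
        for (int len = 2; len <= n; len <<= 1)
        {
            double ang = -2 * Math.PI / len;
            for (int i = 0; i < n; i += len)
            {
                for (int k = 0; k < len / 2; k++)
                {
                    double wr = Math.Cos(ang * k), wi = Math.Sin(ang * k);
                    double ur = re[i + k], ui = im[i + k];
                    double vr = re[i + k + len / 2] * wr - im[i + k + len / 2] * wi;
                    double vi = re[i + k + len / 2] * wi + im[i + k + len / 2] * wr;
                    re[i + k] = ur + vr; im[i + k] = ui + vi;
                    re[i + k + len / 2] = ur - vr; im[i + k + len / 2] = ui - vi;
                }
            }
        }
    }

    // Index of the strongest bucket in the lower (real) half of the
    // spectrum; frequency is roughly index * sampleRate / windowSize.
    public static int PeakBucket(double[] samples)
    {
        var re = (double[])samples.Clone();
        var im = new double[samples.Length];
        Fft(re, im);
        int peak = 1; // skip the DC bucket at index 0
        for (int k = 1; k < samples.Length / 2; k++)
            if (re[k] * re[k] + im[k] * im[k] > re[peak] * re[peak] + im[peak] * im[peak])
                peak = k;
        return peak;
    }
}
```

With a 1024-sample window at 44,100 Hz, each bucket is about 43 Hz wide — which is exactly the "buckets, not exact frequencies" limitation described above.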

The other answers don't really explain how to do this; they just kind of wave their hands. For example, you would have no idea from reading those answers that the output of an FFT is a set of complex numbers, and you wouldn't have any clue how to interpret them.
Moreover, the FFT is not even the best available method, although for your purposes it works fine and most people find it the most intuitive. Anyway, this question has been asked to death, so I'll just refer you to other questions on SO. Not all apply to C#, but you are going to need to understand the (non-trivial) concepts first. You can find an answer to your question by reading the answers to these questions and following the links.
frequency / pitch detection for dummies
Get the frequency of an audio file in every 1/4 seconds in android
How to detect sound frequency / pitch on an iPhone?
How to calculate sound frequency in android?
how to retrieve the different original frequency in each FFT calculation without any frequency leakage in java

You must compute the FFT (Fast Fourier Transform) of a piece of the signal and look for a peak. For the choice of FFT, window type, window size, and so on, you should read some documentation on signal processing. In any case, a 25 ms window is OK; use a Hamming window, for example. On the net there is a lot of code for computing the FFT. Good luck!
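As a small sketch of the windowing step (names are my own), here is the classic Hamming window applied to a frame before the FFT; it tapers the frame edges to reduce spectral leakage:

```csharp
using System;

static class HammingSketch
{
    // Multiply a frame by the Hamming window:
    // w[i] = 0.54 - 0.46 * cos(2*pi*i / (N-1)).
    public static double[] Apply(double[] frame)
    {
        int n = frame.Length;
        var windowed = new double[n];
        for (int i = 0; i < n; i++)
        {
            double w = 0.54 - 0.46 * Math.Cos(2 * Math.PI * i / (n - 1));
            windowed[i] = frame[i] * w;
        }
        return windowed;
    }
}
```

Note that 25 ms at 44.1 kHz is about 1102 samples, so in practice you would zero-pad the windowed frame to 2048 samples to satisfy the power-of-2 requirement.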

Related

Detect unstable trend (timeseries)

I'm looking for a way to detect faulty sensors in an IOT environment.
In this case it's a tank level sensor. The readings always fluctuate somewhat, and the "hop" at the beginning is a tank refill, which is "normal". On Sep 16 the sensor started to malfunction and just gives apparently random values after that.
As a programmer ideally I'd like a simple way of detecting the problem (and as soon after it starts as possible).
I could mess about with rules like "if the direction of the vector between two hourly averages changes more than once per day, it is unstable". But I guess there are more sound and stable algorithms out there.
Two simple options:
domain knowledge based: If you know the maximum possible output of the tank (say 5 liters/h), any drop larger than that signals an error. I.e. in the case of the example, flag an error if
t1 - t2 > 5
where t1 and t2 are the tank levels at consecutive hourly intervals. You might want to add a safety margin for sensor accuracy.
past data based: Assuming that all tanks are similar regarding output capacity and sensor quality, calculate the following over all your data from non-faulty sensors:
max(t1-t2)
The result is the error threshold to be used, similar to the value 5 above.
Note: tank refill operation might require additional consideration.
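The domain-knowledge check above can be sketched like this (a minimal sketch; the class, method names, and the 5.0 liters/h figure from the example are illustrative, not from any library):

```csharp
using System;
using System.Collections.Generic;

static class TankCheck
{
    // Return the indices of readings whose drop from the previous reading
    // exceeds the physically possible output plus a sensor-accuracy margin.
    public static List<int> SuspectIndices(double[] hourlyLevels,
                                           double maxDrawPerHour,
                                           double sensorMargin)
    {
        var suspect = new List<int>();
        for (int i = 1; i < hourlyLevels.Length; i++)
        {
            double drop = hourlyLevels[i - 1] - hourlyLevels[i];
            // A drop larger than the tank can physically produce can only
            // be a faulty reading, not real consumption.
            if (drop > maxDrawPerHour + sensorMargin)
                suspect.Add(i);
        }
        return suspect;
    }
}
```

Refills (level going up) pass through unflagged here; as the note above says, they might need additional consideration.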
Additional methods are described e.g. here. You can find other papers for sure.
http://bourbon.usc.edu/leana/pubs-full/sensorfaults.pdf
Standard deviation.
You're looking at how much variation there is between the measurements. Standard deviation is an easy formula, and well known. Look for a high value, and you know there's a problem.
You can also use the coefficient of variation, which is the ratio of the standard deviation to the mean.
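A minimal sketch of both statistics (names are my own); in practice you would compute them over a sliding window of recent readings and compare against a threshold learned from known-good data:

```csharp
using System;
using System.Linq;

static class VariationCheck
{
    // Sample standard deviation of a window of readings.
    public static double StdDev(double[] window)
    {
        double mean = window.Average();
        double sumSq = window.Sum(x => (x - mean) * (x - mean));
        return Math.Sqrt(sumSq / (window.Length - 1));
    }

    // Coefficient of variation: std dev relative to the mean, useful when
    // tanks of different sizes should share one threshold.
    public static double CoefficientOfVariation(double[] window)
        => StdDev(window) / window.Average();
}
```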

C#/XNA - Playing a generated tone based on frequency

I'm writing a little app that's pretty much a sequencer (8 bit synths) I have a formula which converts a note to its corresponding frequency:
private float returnFrequency(Note note)
{
    // TwoToTheTwelfthRoot = Math.Pow(2, 1.0 / 12); A4 (440 Hz) is 57 semitones above C0
    return (float)(440 * Math.Pow(TwoToTheTwelfthRoot, note.SemitonesFromC0 - 57));
}
Basically, what I'm trying to do is play a generated tone (sine, square, saw, etc) with this frequency, so it's audible through the speakers. Does XNA have any support for this? Or would I have to use an additional library?
I do not want to import 80+ samples of a sine wave at different frequencies through the Content Pipeline just so I could play tones with different frequencies.
For those of you who requested the link, and for future people who might need it:
http://www.david-gouveia.com/creating-a-basic-synth-in-xna-part-i/
He first goes through the dynamic sound instance, then goes to another level by showing you how to create voices (allowing a sort of 'play piano with your keyboard' type thing).
Funny thing is, David Gouveia has a Stack Exchange account, so I wouldn't be surprised if he gets a notification about this, or if some people recognize him.
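The core of the tutorial's approach — computing PCM samples yourself instead of importing dozens of sound files — can be sketched like this (class and method names are my own; in XNA you would submit the resulting buffer to DynamicSoundEffectInstance.SubmitBuffer):

```csharp
using System;

static class ToneSketch
{
    // Equal-tempered note frequency; A4 (440 Hz) is 57 semitones above C0.
    public static float NoteFrequency(int semitonesFromC0)
        => (float)(440.0 * Math.Pow(2.0, (semitonesFromC0 - 57) / 12.0));

    // Fill a mono 16-bit PCM buffer with a sine tone at the given frequency.
    public static byte[] SineBuffer(float frequency, int sampleRate, int sampleCount)
    {
        var buffer = new byte[sampleCount * 2]; // 2 bytes per 16-bit sample
        for (int i = 0; i < sampleCount; i++)
        {
            double t = (double)i / sampleRate;
            short sample = (short)(Math.Sin(2 * Math.PI * frequency * t) * short.MaxValue);
            buffer[2 * i] = (byte)(sample & 0xFF);            // little-endian low byte
            buffer[2 * i + 1] = (byte)((sample >> 8) & 0xFF); // high byte
        }
        return buffer;
    }
}
```

Swapping the Math.Sin call for a square or sawtooth function gives the other waveforms mentioned in the question.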

What is a good image processing algorithm for comparing differences in video frames?

I am looking for a good (simple, relatively fast) algorithm for comparing video frames and calculating the difference between the frames. I imagine a function like this:
//Same Scene
diff = ImageDiff(FrameInScene1, nextFrameInScene1);
//diff is low
//Scene Boundary
diff = ImageDiff(FrameInScene2, nextFrameInScene3);
//diff is high
Where diff is a numeric value of the similarity/difference between the frames. For example, two adjacent frames in the same scene would have low values, but a scene change would have very high values.
Note: I am not looking for a scene detection algorithm (some are timecode based), but this would be a good example of the problem.
A library with C# code would be ideal
Consecutive frames? Mean Squared Error, Mean Absolute Error, PSNR.
Given so little information about your problem it doesn't make sense to suggest anything more.
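For illustration, a minimal sketch of the simplest of these, mean squared error between two grayscale frames held as byte arrays (names are my own):

```csharp
using System;

static class FrameDiff
{
    // Mean squared error between two frames of equal size;
    // low values mean the frames are similar.
    public static double Mse(byte[] a, byte[] b)
    {
        if (a.Length != b.Length)
            throw new ArgumentException("Frames must have the same size.");
        double sum = 0;
        for (int i = 0; i < a.Length; i++)
        {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return sum / a.Length;
    }
}
```

Identical frames score 0; a hard scene cut scores far higher than any two adjacent frames in the same scene, which matches the diff behavior the question describes.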
I am not sure about C#! Have you used OpenCV? I wrote the code in C and used the Bhattacharyya method for the comparison. You can use OpenCV from C# as well; look at: http://www.emgu.com/wiki/index.php/Main_Page.
All you would be doing is:
Grab the two frames.
Get the histograms of these in two separate pointers.
Pass these two pointers and use a normalization factor and compare the histograms.
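The steps above can be sketched in plain C# without OpenCV, using the Bhattacharyya coefficient on normalized gray-level histograms (a sketch with names of my own; 1.0 means identical distributions):

```csharp
using System;

static class HistogramCompare
{
    // Normalized gray-level histogram of an 8-bit image.
    public static double[] Histogram(byte[] pixels, int bins = 256)
    {
        var h = new double[bins];
        foreach (byte p in pixels) h[p * bins / 256]++;
        for (int i = 0; i < bins; i++) h[i] /= pixels.Length; // normalize to sum 1
        return h;
    }

    // Bhattacharyya coefficient between two normalized histograms:
    // sum of sqrt(h1[i] * h2[i]); 1.0 for identical histograms.
    public static double Bhattacharyya(double[] h1, double[] h2)
    {
        double sum = 0;
        for (int i = 0; i < h1.Length; i++)
            sum += Math.Sqrt(h1[i] * h2[i]);
        return sum;
    }
}
```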
I hope this helps.

How can I do real-time pitch detection in .Net?

I want to make a program that detects the note being played in front of the microphone. I am testing the FFT function of NAudio, but with the tests I did in Audacity it seems that the FFT does not detect the pitch correctly. I played a C5, but the highest peak was at E7.
I changed the first dropdown box in the frequency analysis window to "enhanced autocorrelation", and after that the highest peak was at C5.
I googled "enhanced autocorrelation" and had no luck.
You are likely getting thrown off by harmonics. Have you tried testing with a sine wave to see if NAudio's FFT is in the ballpark?
See these references:
http://cnx.org/content/m11714/latest/
http://www.gamedev.net/community/forums/topic.asp?topic_id=506592&whichpage=1&#3306838
Line 48 in Spectrum.cpp in the Audacity source code seems to be close to what you want. They also reference an IEEE paper by Tolonen and Karjalainen.
The highest peak in an audio spectrum is not necessarily the musical pitch as a human would perceive it, especially in a sound with strong overtones. That's because pitch is a human psycho-perceptual phenomenon; the brain will often deduce frequencies that aren't even present in a waveform.
Auto-correlation methods of frequency or pitch estimation (roughly, finding how far apart even a funny-looking and/or non-sinusoidal waveform repeats in time) are usually a better match for what a human would call pitch. The reason for the various enhancements to the autocorrelation algorithm is that simple autocorrelation will find a near-infinite number of repeating wavelengths (e.g. if it repeats every 1 second, it also repeats twice every 2 seconds, etc.). So the trick is to weight the correlation so that it statistically better matches what a human would guess about the same waveform.
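A minimal sketch of the plain (unenhanced) autocorrelation idea, with names of my own — find the lag at which the waveform best matches a shifted copy of itself, then frequency = sampleRate / bestLag. Real detectors add the weighting tricks described above:

```csharp
using System;

static class PitchSketch
{
    // Estimate the fundamental frequency by brute-force autocorrelation
    // over lags corresponding to the [minHz, maxHz] search range.
    public static double EstimateFrequency(double[] samples, int sampleRate,
                                           double minHz = 60, double maxHz = 1500)
    {
        int minLag = (int)(sampleRate / maxHz);
        int maxLag = (int)(sampleRate / minHz);
        int bestLag = minLag;
        double bestCorr = double.MinValue;
        for (int lag = minLag; lag <= maxLag && lag < samples.Length; lag++)
        {
            // Correlation of the signal with itself shifted by 'lag' samples.
            double corr = 0;
            for (int i = 0; i + lag < samples.Length; i++)
                corr += samples[i] * samples[i + lag];
            if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
        }
        return (double)sampleRate / bestLag;
    }
}
```

The lower bound on the search range is what keeps this from "detecting" the trivial repeat at lag 0; without the weighting enhancements it can still lock onto a multiple of the true period on real instrument sounds.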
Well, if you can live with GPLv2, why not take a peek at the Audacity source code?
http://audacity.sourceforge.net/download/beta_source

Timing in C# real time audio analysis

I'm trying to determine the "beats per minute" from real-time audio in C#. It is not music that I'm detecting, though, just a constant tapping sound. My problem is determining the time between those taps so I can determine "taps per minute". I have tried using the WaveIn.cs class that's out there, but I don't really understand how it samples. I'm not getting a set number of samples per second to analyze. I guess I really just don't know how to read in an exact number of samples per second, to know the time between two samples.
Any help to get me in the right direction would be greatly appreciated.
I'm not sure which WaveIn.cs class you're using, but usually with code that records audio, you either A) tell the code to start recording, and then at some later point you tell the code to stop, and you get back an array (usually of type short[]) that comprises the data recorded during this time period; or B) tell the code to start recording with a given buffer size, and as each buffer is filled, the code makes a callback to a method you've defined with a reference to the filled buffer, and this process continues until you tell it to stop recording.
Let's assume that your recording format is 16 bits (aka 2 bytes) per sample, 44100 samples per second, and mono (1 channel). In the case of (A), let's say you start recording and then stop recording exactly 10 seconds later. You will end up with a short[] array that is 441,000 (44,100 x 10) elements in length. I don't know what algorithm you're using to detect "taps", but let's say that you detect taps in this array at element 0, element 22,050, element 44,100, element 66,150 etc. This means you're finding taps every .5 seconds (because 22,050 is half of 44,100 samples per second), which means you have 2 taps per second and thus 120 BPM.
In the case of (B) let's say you start recording with a fixed buffer size of 44,100 samples (aka 1 second). As each buffer comes in, you find taps at element 0 and at element 22,050. By the same logic as above, you'll calculate 120 BPM.
Hope this helps. With beat detection in general, it's best to record for a relatively long time and count the beats through a large array of data. Trying to estimate the "instantaneous" tempo is more difficult and prone to error, just like estimating the pitch of a recording is more difficult to do in realtime than with a recording of a full note.
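The arithmetic above can be sketched like this (a minimal sketch with names of my own): given the sample indices where taps were detected and the sample rate, convert the average gap between taps into BPM.

```csharp
using System;

static class TapTempo
{
    // Taps per minute from the sample indices of detected taps.
    public static double Bpm(int[] tapSampleIndices, int sampleRate)
    {
        if (tapSampleIndices.Length < 2)
            throw new ArgumentException("Need at least two taps.");
        // Average distance between consecutive taps, in samples.
        double totalGap = tapSampleIndices[tapSampleIndices.Length - 1] - tapSampleIndices[0];
        double avgGap = totalGap / (tapSampleIndices.Length - 1);
        double secondsPerTap = avgGap / sampleRate;
        return 60.0 / secondsPerTap;
    }
}
```

Using the example above — taps at elements 0, 22,050, 44,100, and 66,150 at 44,100 samples/second — the average gap is half a second, giving 120 BPM.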
I think you might be confusing samples with "taps."
A sample is a number representing the height of the sound wave at a given moment in time. A typical wave file might be sampled 44,100 times a second, so if you have two channels for stereo, you have 88,200 sixteen-bit numbers (samples) per second.
If you take all of these numbers and graph them, you will get the sound's waveform (the original answer showed a waveform image from vbaccelerator.com). What you are looking for is a sharp peak in that waveform: that is the tap.
Assuming we're talking about the same WaveIn.cs, the constructor of WaveLib.WaveInRecorder takes a WaveLib.WaveFormat object as a parameter. This allows you to set the audio format, i.e. sample rate, bit depth, etc. Just scan the audio samples for peaks, or however you're detecting "taps", and record the average distance in samples between peaks.
Since you know the sample rate of the audio stream (e.g. 44,100 samples/second), take your average peak distance (in samples), divide by the sample rate to get the time (in seconds) between taps, divide by 60 to get the time (in minutes) between taps, and invert to get taps/minute.
Hope that helps
