I'm trying to play back a sound at a given pitch using the (C#) XNA audio framework's SoundEffect class. So far, I have this (very basic) function:
public void playSynth(SoundEffect se, int midiNote)
{
    float pitch = (float)functions.getMIDIFreq(midiNote) / ((float)functions.getMIDIFreq(127) / 2);
    pitch -= 1F;
    Debug.WriteLine("Note: {0}. Float: {1}", midiNote, pitch);
    synth = se.CreateInstance();
    synth.IsLooped = false;
    synth.Pitch = pitch;
    synth.Play();
}
Currently, the pitch played back is very off-key because the math is wrong. The way this function works is that I send a MIDI note (0 through 127) to it and use a helper function I made, getMIDIFreq, to convert that note to a frequency - which works fine.
To call this function, I'm using this:
SoundEffect sound = SoundEffect.FromStream(TitleContainer.OpenStream(@"synth.wav"));
playSynth(sound, (int)midiNote); // where midiNote is a 0...127 number
where synth.wav is a simple C note I created in a DAW and exported. The whole point of this program is to play back the MIDI note given in that synth sound, but I'd gladly settle for a sine wave, or anything really. I can't use Console.Beep because it's extremely slow and not for playing entire songs with notes in rapid succession.
So my question is, how could I fix this code so it plays the sample at the right pitch? I realize I only have 2 octaves to work with here, so if there's a solution that involves generating a tone at a given frequency and is very fast, that would be even better.
Thanks!
EDIT: I'm making this a WinForms application, not an XNA game, but I have the framework downloaded and referenced in my project.
You can't apply an arbitrary frequency. You can only lower the pitch by up to an octave (half the frequency) or raise it by up to an octave (double the frequency). So, to calculate the pitch bend value, you first need to know the initial pitch of the sample.
Suppose your sample is a 440 Hz A and you want the A an octave down (220 Hz). The value you need is -1. The ratio yourPitch / initialPitch ranges from 0.5 to 2.0, and you need to map that onto the scale of -1 to +1. I can't tell you exactly how, because the documentation isn't clear about whether the scale is logarithmic. You would have to experiment, but this should get you started.
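If the scale does turn out to be logarithmic (so that -1, 0 and +1 line up with half, unchanged and double frequency), a minimal sketch might look like this. Note that samplePitchHz, the actual frequency of your synth.wav, is something you have to know or measure yourself:

// Sketch only: assumes Pitch is logarithmic, one unit per octave.
// samplePitchHz is the known frequency of the recorded sample (the C you
// exported from your DAW); targetHz would come from getMIDIFreq(midiNote).
float PitchFor(float targetHz, float samplePitchHz)
{
    float octaves = (float)(Math.Log(targetHz / samplePitchHz) / Math.Log(2.0));
    return MathHelper.Clamp(octaves, -1f, 1f); // XNA only accepts [-1, +1]
}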
I am currently working on a project in C# where I play around with planetary gravitation, which I know is a hardcore topic to grasp to its fullest, but I like challenges. I've been reading up on Newton's laws and Kepler's laws, but one thing I cannot figure out is how to get the correct gravitational direction.
In my example I only have two bodies, a satellite and a planet. This is to simplify it so I can grasp it - but my plan is to have multiple objects that dynamically affect each other, and hopefully end up with a somewhat realistic multi-body system.
When you have an orbit, the satellite experiences a gravitational force, and that is of course in the direction of the planet, but that direction isn't constant. To explain my problem better, I'll try using an example:
Let's say we have a satellite moving at a speed of 50 m/s that accelerates towards the planet at 10 m/s/s, in a radius of 100 m (all theoretical numbers). If we then say that the framerate is one frame per second, then after one second the object will be 50 units forward and 10 units down.
As the satellite moves multiple units in a frame - about 50% of the radius - the gravitational direction has shifted a lot during this frame, but the applied force has only been "downwards". This creates a big margin of error, especially if the object is moving a large percentage of the radius.
In our example we'd probably need our gravitational direction to be based upon the average between our current position and the position at the end of this frame.
How would one go about calculating this?
I have a basic understanding of trigonometry, but mainly with focus on triangles. Assume I am stupid, because compared to any of you, I probably am.
(I made a previous question but ended up deleting it, as it created some hostility; it was basically not that well phrased and was all too general - it wasn't really a specific question. I hope this is better. If not, please inform me, I am here to learn :) )
Just for reference, this is the function I have right now for movement:
foreach (ExtTerBody OtherObject in UniverseController.CurrentUniverse.ExterTerBodies.Where(x => x != this))
{
    double massOther = OtherObject.Mass;
    double R = Vector2Math.Distance(Position, OtherObject.Position);
    double V = (massOther) / Math.Pow(R, 2) * UniverseController.DeltaTime;
    Vector2 NonNormTwo = (OtherObject.Position - Position).Normalized() * V;
    Vector2 NonNormDir = Velocity + NonNormTwo;
    Velocity = NonNormDir;
    Position += Velocity * Time.DeltaTime;
}
If I have phrased myself badly, please ask me to rephrase parts - English isn't my native language, and specific subjects can be hard to phrase when you don't know the correct technical terms. :)
I have a hunch that this is covered by Kepler's second law, but if it is, I'm not sure how to use it, as I don't understand his laws to the fullest.
Thank you for your time - it means a lot!
(Also, if anyone sees any mistakes in my function, please point them out!)
I am currently working on a project in C# where I play around with planetary gravitation
This is a fun way to learn simulation techniques, programming and physics at the same time.
One thing I cannot figure out is how to get the correct gravitational direction.
I assume that you are not trying to simulate relativistic gravitation. The Earth isn't in orbit around the Sun, the Earth is in orbit around where the sun was eight minutes ago. Correcting for the fact that gravitation is not instantaneous can be difficult. (UPDATE: According to commentary this is incorrect. What do I know; I stopped taking physics after second year Newtonian dynamics and have only the vaguest understanding of tensor calculus.)
You'll do best at this early stage to assume that the gravitational force is instantaneous and that planets are points with all their mass at the center. The gravitational force vector is a straight line from one point to another.
Let's say we have a satellite moving at a speed of 50 m/s ... If we then say that the framerate is one frame per second then after one second the object will be 50 units right and 10 units down.
Let's make that more clear. Force is equal to mass times acceleration. You work out the force between the bodies. You know their masses, so you now know the acceleration of each body. Each body has a position and a velocity. The acceleration changes the velocity. The velocity changes the position. So if the particle starts off having a velocity of 50 m/s to the left and 0 m/s down, and then you apply a force that accelerates it by 10 m/s/s down, then we can work out the change to the velocity, and then the change to the position. As you note, at the end of that second the position and the velocity will have both changed by a huge amount compared to their existing magnitudes.
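As a sketch of that chain (force gives acceleration, acceleration changes velocity, velocity changes position), one Euler time step might look like the following. The names mirror the ones in the question's code; G is a hypothetical gravitational constant scaled however your units require:

// One Euler time step for the satellite (names are illustrative).
Vector2 toPlanet = planet.Position - satellite.Position;
double r = Vector2Math.Distance(satellite.Position, planet.Position);
Vector2 acceleration = toPlanet.Normalized() * (G * planet.Mass / (r * r));
satellite.Velocity += acceleration * deltaT; // 10 m/s/s down for 1 s adds 10 m/s down
satellite.Position += satellite.Velocity * deltaT;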
As the satellite moves multiple units in a frame - about 50% of the radius - the gravitational direction has shifted a lot during this frame, but the applied force has only been "downwards". This creates a big margin of error, especially if the object is moving a large percentage of the radius.
Correct. The problem is that the frame rate is enormously too low to correctly model the interaction you're describing. You need to be running the simulation so that you're looking at tenths, hundredths or thousandths of seconds if the objects are changing direction that rapidly. The size of the time step is usually called the "delta t" of the simulation, and yours is way too large.
For planetary bodies, what you're doing now is like trying to model the earth by simulating its position every few months and assuming it moves in a straight line in the meanwhile. You need to actually simulate its position every few minutes, not every few months.
In our example we'd probably need our gravitational direction to be based upon the average between our current position and the position at the end of this frame.
You could do that but it would be easier to simply decrease the "delta t" for the computation. Then the difference between the directions at the beginning and the end of the frame is much smaller.
Once you've got that sorted out then there are more techniques you can use. For example, you could detect when the position changes too much between frames and go back and redo the computations with a smaller time step. If the positions change hardly at all then increase the time step.
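A sketch of that idea, with hypothetical Step, BodyState, minDt and maxDt (none of these come from any particular engine):

// Adaptive stepping: halve the time step and retry when a body moved too
// far in one step; grow the step again when almost nothing happened.
float dt = 0.01f;           // starting delta t, in seconds
const float maxMove = 0.5f; // largest acceptable move per step
while (time < endTime)
{
    BodyState trial = Step(current, dt); // one integration step
    float moved = Vector2.Distance(trial.Position, current.Position);
    if (moved > maxMove && dt > minDt)
    {
        dt *= 0.5f; // too coarse: redo this step with a smaller delta t
        continue;
    }
    current = trial;
    time += dt;
    if (moved < maxMove * 0.25f && dt < maxDt)
        dt *= 2f; // barely moved: a larger delta t is safe
}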
Once you've got that sorted, there are lots of more advanced techniques you can use in physics simulations, but I would start by getting basic time stepping really solid first. The more advanced techniques are essentially variations on your idea of "do a smarter interpolation of the change over the time step" -- you are on the right track here, but you should walk before you run.
I'll start with a technique that is almost as simple as the Euler-Cromer integration you've been using but is markedly more accurate. This is the leapfrog technique. The idea is very simple: position and velocity are kept at half time steps from one another.
The initial state has position and velocity at time t0. To get that half step offset, you'll need a special case for the very first step, where velocity is advanced half a time step using the acceleration at the start of the interval and then position is advanced by a full step. After this first time special case, the code works just like your Euler-Cromer integrator.
In pseudocode, the algorithm looks like this:
void calculate_accel (orbiting_body_collection, central_body) {
    foreach (orbiting_body : orbiting_body_collection) {
        delta_pos = central_body.pos - orbiting_body.pos;
        orbiting_body.acc =
            (central_body.mu / pow(delta_pos.magnitude(), 3)) * delta_pos;
    }
}
void leapfrog_step (orbiting_body_collection, central_body, delta_t) {
    static bool initialized = false;
    calculate_accel (orbiting_body_collection, central_body);
    if (! initialized) {
        initialized = true;
        foreach orbiting_body {
            orbiting_body.vel += orbiting_body.acc * delta_t / 2.0;
            orbiting_body.pos += orbiting_body.vel * delta_t;
        }
    }
    else {
        foreach orbiting_body {
            orbiting_body.vel += orbiting_body.acc * delta_t;
            orbiting_body.pos += orbiting_body.vel * delta_t;
        }
    }
}
Note that I've added acceleration as a field of each orbiting body. This was a temporary step to keep the algorithm similar to yours. Note also that I moved the calculation of acceleration to its own separate function. That is not a temporary step. It is the first essential step to advancing to even more advanced integration techniques.
The next essential step is to undo that temporary addition of the acceleration. The accelerations properly belong to the integrator, not the body. On the other hand, the calculation of accelerations belongs to the problem space, not the integrator. You might want to add relativistic corrections, or solar radiation pressure, or planet-to-planet gravitational interactions. The integrator should be unaware of how those accelerations are calculated. The function calculate_accels is a black box called by the integrator.
Different integrators have very different concepts of when accelerations need to be calculated. Some store a history of recent accelerations, some need an additional workspace to compute an average acceleration of some sort. Some do the same with velocities (keep a history, have some velocity workspace). Some more advanced integration techniques use a number of techniques internally, switching from one to another to provide the best balance between accuracy and CPU usage. If you want to simulate the solar system, you need an extremely accurate integrator. (And you need to move far, far away from floats. Even doubles aren't good enough for a high precision solar system integration. With floats, there's not much point going past RK4, and maybe not even leapfrog.)
Properly separating what belongs to whom (the integrator versus the problem space) makes it possible to refine the problem domain (add relativity, etc.) and makes it possible to easily switch integration techniques so you can evaluate one technique versus another.
So I found a solution. It might not be the smartest, but it works, and it's pretty simple. It came to mind after reading both Eric's answer and the comment made by Marcus; you could say that it's a combination of the two:
This is the new code:
foreach (ExtTerBody OtherObject in UniverseController.CurrentUniverse.ExterTerBodies.Where(x => x != this))
{
    double massOther = OtherObject.Mass;
    double R = Vector2Math.Distance(Position, OtherObject.Position);
    double V = (massOther) / Math.Pow(R, 2) * Time.DeltaTime;
    float VRmod = (float)Math.Round(V / (R * 0.001), 0, MidpointRounding.AwayFromZero);
    if (V > R * 0.01f)
    {
        for (int x = 0; x < VRmod; x++)
        {
            EulerMovement(OtherObject, Time.DeltaTime / VRmod);
        }
    }
    else
    {
        EulerMovement(OtherObject, Time.DeltaTime);
    }
}
public void EulerMovement(ExtTerBody OtherObject, float deltaTime)
{
    double massOther = OtherObject.Mass;
    double R = Vector2Math.Distance(Position, OtherObject.Position);
    double V = (massOther) / Math.Pow(R, 2) * deltaTime;
    Vector2 NonNormTwo = (OtherObject.Position - Position).Normalized() * V;
    Vector2 NonNormDir = Velocity + NonNormTwo;
    Velocity = NonNormDir;
    //Debug.WriteLine("Velocity=" + Velocity);
    Position += Velocity * deltaTime;
}
To explain it:
I came to the conclusion that if the problem was that the satellite had too much velocity in one frame, then why not separate it into multiple frames? So this is what "it" does now.
When the velocity of the satellite is more than 1% of the current radius, it separates the calculation into multiple smaller steps, making it more precise. This will of course lower the framerate when working with high velocities, but that's okay for a project like this.
Different solutions are still very welcome. I might tweak the trigger amounts, but the most important thing is that it works; then I can worry about making it smoother!
Thanks to everybody who took a look, and everyone who helped me find the conclusion myself! :) It's awesome that people can help like this!
I'm writing a little app that's pretty much a sequencer (8-bit synths). I have a formula which converts a note to its corresponding frequency:
private float returnFrequency(Note note)
{
    // TwoToTheTwelfthRoot = Math.Pow(2, 1.0 / 12); the note 57 semitones above C0 is A4 (440 Hz)
    return (float)(440 * Math.Pow(TwoToTheTwelfthRoot, (note.SemitonesFromC0 - 57)));
}
Basically, what I'm trying to do is play a generated tone (sine, square, saw, etc) with this frequency, so it's audible through the speakers. Does XNA have any support for this? Or would I have to use an additional library?
I do not want to import 80+ samples of a sine wave at different frequencies through the Content Pipeline just so I could play tones with different frequencies.
For those of you who requested the link, and for the future people who might need it:
http://www.david-gouveia.com/creating-a-basic-synth-in-xna-part-i/
He first goes through the dynamic sound instance, then goes to another level by showing you how to create voices (allowing a sort of 'play piano with your keyboard' type thing).
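For reference, the first part of the article boils down to submitting generated PCM samples to a DynamicSoundEffectInstance. A minimal sketch (buffer and voice handling are greatly simplified compared to the article; returnFrequency is the method from the question):

const int sampleRate = 44100;
var instance = new DynamicSoundEffectInstance(sampleRate, AudioChannels.Mono);

float frequency = returnFrequency(note);  // e.g. 440 Hz for A4
byte[] buffer = new byte[sampleRate * 2]; // one second of 16-bit mono PCM
for (int i = 0; i < sampleRate; i++)
{
    short sample = (short)(Math.Sin(2 * Math.PI * frequency * i / sampleRate) * short.MaxValue);
    buffer[2 * i] = (byte)sample;            // low byte (little-endian)
    buffer[2 * i + 1] = (byte)(sample >> 8); // high byte
}
instance.SubmitBuffer(buffer);
instance.Play();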
Funny thing is, David Gouveia has a Stack Exchange account, so I wouldn't be surprised if I get a notification from him, nor if some people recognize him.
I want to make a program that detects the note being played in front of the microphone. I am testing the FFT function of NAudio, but from the tests I did in Audacity it seems that FFT does not detect the pitch correctly: I played a C5, but the highest peak was at E7.
I changed the first dropdown box in the frequency analysis window to "enhanced autocorrelation", and after that the highest peak was at C5.
I googled "enhanced autocorrelation" and had no luck.
You are likely getting thrown off by harmonics. Have you tried testing with a sine wave to see if NAudio's FFT is in the ballpark?
See these references:
http://cnx.org/content/m11714/latest/
http://www.gamedev.net/community/forums/topic.asp?topic_id=506592&whichpage=1
Line 48 in Spectrum.cpp in the Audacity source code seems to be close to what you want. They also reference an IEEE paper by Tolonen and Karjalainen.
The highest peak in an audio spectrum is not necessarily the musical pitch as a human would perceive it, especially in a sound with strong overtones. That's because pitch is a human psycho-perceptual phenomenon; the brain will often deduce frequencies that aren't even present in a waveform.
Auto-correlation methods of frequency or pitch estimation (roughly, finding how far apart even a funny-looking and/or non-sinusoidal waveform repeats in time) are usually a better match for what a human would call pitch. The reason for the various enhancements to the autocorrelation algorithm is that simple autocorrelation will find a near-infinite number of repeating wavelengths (e.g. if it repeats every 1 second, it also repeats twice every 2 seconds, etc.). So the trick is to weight the correlation so that it statistically better matches what a human would guess about the same waveform.
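To make that concrete, plain (un-enhanced) autocorrelation can be sketched as below; the enhancements are essentially weightings layered on top of this loop to suppress the subharmonics described above:

// Naive autocorrelation pitch estimate: find the lag at which the signal
// best matches a shifted copy of itself. The pitch frequency is then
// approximately sampleRate / bestLag.
static int BestLag(float[] samples, int minLag, int maxLag)
{
    int bestLag = minLag;
    double bestScore = double.MinValue;
    for (int lag = minLag; lag <= maxLag; lag++)
    {
        double score = 0;
        for (int i = 0; i + lag < samples.Length; i++)
            score += samples[i] * samples[i + lag];
        if (score > bestScore) { bestScore = score; bestLag = lag; }
    }
    return bestLag;
}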
Well, if you can live with GPLv2, why not take a peek at the Audacity source code?
http://audacity.sourceforge.net/download/beta_source
I am using XNA to build a project where I can draw "graffiti" on my wall using an LCD projector and a monochrome camera that is filtered to see only hand held laser dot pointers. I want to use any number of laser pointers -- don't really care about differentiating them at this point.
The wall is 10' x 10', and the camera is only 640x480 so I'm attempting to use sub-pixel measurement using a spline curve as outlined here: tpub.com
The camera runs at 120 fps (8-bit), so my question to you all is: what is the fastest way to find that subpixel laser dot center? Currently I'm using a brute-force 2D search to find the brightest pixel in the image (0-254) before doing the spline interpolation. That method is not very fast, and each frame takes longer to compute than the interval between incoming frames.
Edit: To clarify, in the end my camera data is represented by a 2D array of bytes indicating pixel brightness.
What I'd like to do is use an XNA shader to crunch the image for me. Is that practical? From what I understand, there really isn't a way to keep persistent variables in a pixel shader, such as running totals, averages, etc.
But for argument's sake, let's say I found the brightest pixels using brute force, then stored them and their neighboring pixels for the spline curve into X number of vertices using texcoords. Is it practical then to use HLSL to compute a spline curve using texcoords?
I am also open to suggestions outside of my XNA box, be it DX10/DX11, maybe some sort of FPGA, etc. I just don't really have much experience with crunching data in this way. I figure if they can do something like this on a Wii remote using two AA batteries, then I'm probably going about it the wrong way.
Any ideas?
If by brute-forcing you mean looking at every pixel independently, that is basically the only way of doing it. You will have to scan through all the image's pixels, no matter what you want to do with the image. Although you might not need to find the brightest pixels - you can filter the image by color (e.g. if you're using a red laser). This is easily done using an HSV color-coded image. If you are looking for faster algorithms, try OpenCV. It's been optimized again and again for image processing, and you can use it in C# via a wrapper:
http://www.codeproject.com/KB/cs/Intel_OpenCV.aspx
OpenCV can also help you easily find the point centers and track each point.
Is there a reason you are using a 120 fps camera? You know the human eye can only see about 30 fps, right? I'm guessing it's to follow very fast laser movements... You might want to consider bringing it down, because real-time processing at 120 fps will be very hard to achieve.
Running through 640x480 bytes to find the highest byte should run within a millisecond, even on slow processors. There is no need to take the route of shaders.
I would advise optimizing your loop.
For instance, this is really slow (because it does a multiplication with every array lookup):
byte highest = 0;
int foundX = -1, foundY = -1;
for (int y = 0; y < 480; y++)
{
    for (int x = 0; x < 640; x++)
    {
        if (myBytes[x][y] > highest)
        {
            highest = myBytes[x][y];
            foundX = x;
            foundY = y;
        }
    }
}
this is much faster:
byte[] myBytes = new byte[640 * 480];
// fill it with your image
byte highest = 0;
int found = -1, foundX = -1, foundY = -1;
int len = 640 * 480;
for (int i = 0; i < len; i++)
{
    if (myBytes[i] > highest)
    {
        highest = myBytes[i];
        found = i;
    }
}
if (found != -1)
{
    // convert the stored flat index back to x/y coordinates
    foundX = found % 640;
    foundY = found / 640;
}
This is off the top of my head so sorry for errors ;^)
You're dealing with some pretty complex maths if you want sub-pixel accuracy. I think this paper is something to consider. Unfortunately, you'll have to pay to see it using that site. If you've got access to a suitable library, they may be able to get hold of it for you.
The link in the original post suggested doing 1000 spline calculations for each axis - it treated x and y independently, which is OK for circular images but is a bit off if the image is a skewed ellipse. You could use the following to get a reasonable estimate:
xc = sum (xn.f(xn)) / sum (f(xn))
where xc is the mean, xn is a point along the x-axis, and f(xn) is the value at the point xn. So for this:
         *
      *  *
      *  *
      *  *
      *  *
      *  *
      *  *  *
   *  *  *  *
   *  *  *  *
*  *  *  *  *  *
----------------
2  3  4  5  6  7
gives:
sum (xn.f(xn)) = 2 * 1 + 3 * 3 + 4 * 9 + 5 * 10 + 6 * 4 + 7 * 1 = 128
sum (f(xn)) = 1 + 3 + 9 + 10 + 4 + 1 = 28
xc = 128 / 28 = 4.57
and repeat for the y-axis.
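In code, that estimate is just a brightness-weighted mean. A sketch, assuming f holds the summed brightness of each column (run it again on rows for the y-axis):

// Weighted centroid along one axis: sum(xn * f(xn)) / sum(f(xn)).
static double Centroid(double[] f)
{
    double weighted = 0, total = 0;
    for (int x = 0; x < f.Length; x++)
    {
        weighted += x * f[x];
        total += f[x];
    }
    return total > 0 ? weighted / total : -1; // -1 when nothing is lit
}

For the histogram above, with f = {1, 3, 9, 10, 4, 1} and the result offset by the first label (2), this gives the same 4.57.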
Brute force is the only real way; however, your idea of using a shader is good - you'd be offloading the brute-force check from the CPU, which can only look at a small number of pixels simultaneously (roughly one per core), to the GPU, which likely has 100+ dumb cores (pipelines) that can compare pixels simultaneously (your algorithm may need to be modified a bit to work well with the one-instruction-many-cores arrangement of a GPU).
The biggest issue I see is whether or not you can move that data to the GPU fast enough.
Another optimization to consider: if you're drawing, then the current location of the pointer is probably close to the last location of the pointer. Remember the last recorded position of the pointer between frames and only scan a region close to that position... say a 1' x 1' area. Only if the pointer isn't found in that area should you scan the whole surface.
Obviously, there will be a tradeoff between how quickly your program can scan, and how quickly you'll be able to move your mouse before the camera "loses" the pointer and has to go to the slow, full-image scan. A little experimentation will probably reveal the optimum value.
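A sketch of that windowed search, where FindBrightest is a hypothetical helper that scans a clamped rectangle and returns false when no pixel beats a brightness threshold:

// Try a small window around the last known position first; fall back to a
// full-frame scan only when the pointer is not found there.
int x, y;
if (!FindBrightest(image, lastX - 50, lastY - 50, 100, 100, out x, out y))
    FindBrightest(image, 0, 0, 640, 480, out x, out y); // pointer lost: full scan
lastX = x;
lastY = y;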
Cool project, by the way.
Put the camera slightly out of focus and bitblt against a neutral sample. You can quickly scan rows for non-zero values. Also, if you are at 8 bits and pick up 4 bytes at a time, you can process the image faster. As others pointed out, you might reduce the frame rate: if you have less fidelity than the resulting image, there isn't much point in the high scan rate.
(The slightly out-of-focus camera will help pick out just the brightest points and reduce false positives if you have a busy surface... assuming, of course, that you are not shooting a smooth/flat surface.)
Start with a black output buffer. Forget about subpixel for now. Every frame, every pixel, do this:
outbuff=max(outbuff,inbuff);
Do subpixel filtering to a third "clean" buffer when you're done with the image. Or do a chunk or a line of the screen at a time in real time. Advantage: real-time "rough" view of the drawing, cleaned up as you go.
When you convert from the rough output buffer to the "clean" third buffer, you can clear the rough to black. This lets you keep drawing forever without slowing down.
By drawing the "clean" over top the "rough," maybe in a slightly different color, you'll have the best of both worlds.
This is similar to what paint programs do--if you draw really fast, you see a rough version, then the paint program "cleans up" the image when it has time.
Some comments on the algorithm:
I've seen a lot of cheats in this arena. I've played Sonic on a Sega Genesis emulator that upsamples, and it has some pretty wild algorithms that work very well and are very fast.
You actually have some advantages you can exploit here, because you might know the brightness and the radius of the dot.
You might just look at each pixel and its 8 neighbors and let those 9 pixels "vote" according to their brightness for where the subpixel lies.
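One way to read that 3x3 "vote" as code: each of the nine pixels pulls the estimated centre towards itself in proportion to its brightness, yielding a subpixel offset from the brightest pixel (a sketch, not tuned for noise):

// Brightness-weighted offset of the dot centre within the 3x3 neighbourhood
// of (cx, cy); add the result to (cx, cy) for the subpixel position.
static void SubpixelOffset(byte[,] img, int cx, int cy, out double ox, out double oy)
{
    double wx = 0, wy = 0, total = 0;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
        {
            double w = img[cx + dx, cy + dy];
            wx += dx * w;
            wy += dy * w;
            total += w;
        }
    ox = wx / total; // each offset is in (-1, +1)
    oy = wy / total;
}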
Other thoughts
Your hand is not that accurate when you control a laser pointer. Try collecting all the dots every 10 frames or so, identifying which beams are which (based on previous motion, and accounting for new dots, turned-off lasers, and dots that have entered or left the visual field), then just drawing a high-resolution curve. Don't worry about sub-pixel accuracy in the input - just draw the curve into the high-res output.
Use a Catmull-Rom spline, which goes through all control points.
I'm trying to determine the "beats per minute" from real-time audio in C#. It's not music I'm detecting, though, just a constant tapping sound. My problem is determining the time between those taps so I can determine "taps per minute". I have tried using the WaveIn.cs class that's out there, but I don't really understand how its sampling works. I'm not getting a set number of samples a second to analyze, and I don't know how to read in an exact number of samples per second to work out the time between two taps.
Any help to get me in the right direction would be greatly appreciated.
I'm not sure which WaveIn.cs class you're using, but usually with code that records audio, you either A) tell the code to start recording, and then at some later point you tell the code to stop, and you get back an array (usually of type short[]) that comprises the data recorded during this time period; or B) tell the code to start recording with a given buffer size, and as each buffer is filled, the code makes a callback to a method you've defined with a reference to the filled buffer, and this process continues until you tell it to stop recording.
Let's assume that your recording format is 16 bits (aka 2 bytes) per sample, 44100 samples per second, and mono (1 channel). In the case of (A), let's say you start recording and then stop recording exactly 10 seconds later. You will end up with a short[] array that is 441,000 (44,100 x 10) elements in length. I don't know what algorithm you're using to detect "taps", but let's say that you detect taps in this array at element 0, element 22,050, element 44,100, element 66,150 etc. This means you're finding taps every .5 seconds (because 22,050 is half of 44,100 samples per second), which means you have 2 taps per second and thus 120 BPM.
In the case of (B) let's say you start recording with a fixed buffer size of 44,100 samples (aka 1 second). As each buffer comes in, you find taps at element 0 and at element 22,050. By the same logic as above, you'll calculate 120 BPM.
Hope this helps. With beat detection in general, it's best to record for a relatively long time and count the beats through a large array of data. Trying to estimate the "instantaneous" tempo is more difficult and prone to error, just like estimating the pitch of a recording is more difficult to do in realtime than with a recording of a full note.
I think you might be confusing samples with "taps."
A sample is a number representing the height of the sound wave at a given moment in time. A typical wave file might be sampled 44,100 times a second, so if you have two channels for stereo, you have 88,200 sixteen-bit numbers (samples) per second.
If you take all of these numbers and graph them, you will get a waveform plot (the original answer embedded an example waveform image from vbaccelerator.com). What you are looking for is the sharp peak that stands out from the rest of the waveform. That peak is the tap.
Assuming we're talking about the same WaveIn.cs, the constructor of WaveLib.WaveInRecorder takes a WaveLib.WaveFormat object as a parameter. This allows you to set the audio format, i.e. sample rate, bit depth, etc. Just scan the audio samples for peaks (or however you're detecting "taps") and record the average distance in samples between peaks.
Since you know the sample rate of the audio stream (e.g. 44,100 samples/second), take your average peak distance (in samples), multiply by 1/(sample rate) to get the time (in seconds) between taps, divide by 60 to get the time (in minutes) between taps, and invert to get taps/minute.
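Putting that arithmetic into code, assuming peakPositions holds the sample index of each detected tap (for taps at samples 0, 22,050 and 44,100 at 44,100 samples/second, this returns the 120 BPM from the earlier example):

// Average taps per minute from detected peak positions (sample indices).
static double TapsPerMinute(List<int> peakPositions, int sampleRate)
{
    if (peakPositions.Count < 2)
        return 0;
    double span = peakPositions[peakPositions.Count - 1] - peakPositions[0];
    double averageGap = span / (peakPositions.Count - 1); // samples between taps
    double secondsBetweenTaps = averageGap / sampleRate;
    return 60.0 / secondsBetweenTaps;
}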
Hope that helps