Raise real-time event at audio sample rate - C#

I'm playing around with some real-time audio synthesis in C# .NET.
I've got a VCO class that updates its output waveform whenever the output is read. To play a sound, I want to feed it into a DirectSound secondary buffer. I've experimented with this using a byte array that was not populated in real time.
However, to play my VCO in real time, I presume I need to read its output at the same rate as the sample rate specified for the DirectSound object.
Is there a way I can have a timer or callback-type function that raises an event every 1/(sample rate) seconds, so that the real-time VCO output can be matched to the DirectSound sample rate?
I suppose I could have a loop and interrogate Stopwatch.Ticks, but is there a neater way of having an event raised automatically, with no processor load in between?
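For scale, at 44.1 kHz that event would need to fire every 1/44100 s, about 22.7 µs, which is far finer than standard .NET timers resolve; pulling a whole block of samples at once is the usual alternative. A minimal sketch, where vco and its NextSample() method are stand-ins for my class rather than a real API:

int sampleRate = 44100;
short[] block = new short[sampleRate / 10];                 // 100 ms of 16-bit mono samples
for (int i = 0; i < block.Length; i++)
    block[i] = (short)(vco.NextSample() * short.MaxValue);  // NextSample() assumed to return -1.0..1.0
// write 'block' into the DirectSound secondary buffer, then refill as the play cursor advances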

Serial Communication Running Slow in Unity

I have connected an IMU (gyroscope, accelerometer, magnetometer) to my Unity3D project and I'm using serial communication via USB.
The problem is that when my script gets a little heavy with game code, the serial communication slows down: when values from the IMU are supposed to change at a certain point in time, they change a couple of seconds later, and as time goes on the data stream cannot catch up with the game.
I am calling myPort.ReadLine() from the Update function to read serial data from the COM port.
What is the solution? If I understand the problem correctly, I want the serial data reading not to wait for my app's next frame to receive new values.
Might reducing the baud rate of the IMU device help?
There are several things you can do to optimize your project:
1) You can change the communication parameters. If you are sending pitch, yaw or roll, don't send them as strings; send them as bytes in IEEE 754 format (see the sketch after this list). Also, if you are getting quaternion values, don't lose time converting them to Euler angles.
2) Try changing your update rate in the Unity settings, but be careful: a smaller update rate means a slower Unity frame rate.
3) Use a high baud rate, such as 115200 or higher. If you reduce the baud rate, your data from the IMU can be corrupted.
4) In the IMU code, you can throttle your serial port communication: you may not want to send all data continuously, but rather in occasional bursts. In other words, you can downsample your data.
5) I don't know your data size, but you can use coroutines with yield, or Invoke. As you know, Unity does not let us use the serial port's data-received event, but you can create your own event to determine whether all of your data has been received.
(Also try DiscardInBuffer() if you have problems with streaming.)
6) You can write your own DLL (a C# DLL doesn't make much of a difference, by the way, so use C++). If you are using Windows, you can start your search with kernel32.dll and ws2_32.dll.
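A minimal sketch of point 1 on the Unity side, reading 12 raw bytes and decoding three IEEE 754 floats instead of parsing strings (the packet layout and names are illustrative assumptions):

byte[] packet = new byte[12];
int read = 0;
while (read < packet.Length)
    read += myPort.Read(packet, read, packet.Length - read);  // SerialPort.Read may return fewer bytes than requested
float pitch = BitConverter.ToSingle(packet, 0);               // assumes the sender uses the same byte order
float yaw   = BitConverter.ToSingle(packet, 4);
float roll  = BitConverter.ToSingle(packet, 8);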
So I figured out the best solution to the problem: threading.
Since serial data has its own speed and must not depend on the game's update frequency (otherwise the serial stream buffers up and huge lag occurs), the following solution worked for me:
In the initialization function I call InvokeRepeating, a timer that repeats every 0.01 seconds, and in that timer I start a thread that reads the serial data.
Here is the timer function:
bool threadAlive = false;

void Start()
{
    // attempt a serial read every 0.01 s, independent of the frame rate
    InvokeRepeating("TimerFunction", 0f, 0.01f);
}

void TimerFunction()
{
    if (!threadAlive)
    {
        threadAlive = true;               // set before starting, so we never launch two readers
        new Thread(SerialThread).Start(); // System.Threading
    }
}

void SerialThread()
{
    string line = myPort.ReadLine();      // blocking read happens off the main thread
    threadAlive = false;
}
This way, when there is a problem with the serial port, the game won't lag, because the thread takes the pressure on itself.
InvokeRepeating makes sure the thread (reading data) is attempted every fixed (desired) amount of time, and therefore the serial data will not buffer up.
It worked just fine for me. The serial data rate was about 16 Hz and the gameplay frame rate was 10-100 fps.

How to timestamp two streams and synchronize them at the receiver?

I have two streams of data, one with audio data, the other with video data.
The sound I record with DirectSound is put in a buffer 100 ms in length, and a DirectShow ISampleGrabber grabs frames for me at 30 frames per second (one frame every 33.33 ms).
What does timestamping mean? Should I attach a DateTime field to the video/audio and, at the receiving end, check which audio packet has the same timestamp as the video frame?
I know this is a really hard subject, but can you please give me an idea?
It means each video/audio element has a time offset that says when it must be played, relative to when the video/audio was started. The receiving end will order received elements by their timestamp and play them in order; it will also wait when video or audio elements are missing.
You should not add a DateTime attribute to every element. Instead, the video/audio header should indicate at what frame rate or frequency the media must be played, and therefore how many elements the receiver will get every second. So a simple autonumber would do. It's the player's responsibility to order the received elements and check whether the point up to which it has received all elements is far enough in the future that it can keep playing.
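A minimal sketch of that player-side ordering, assuming a simple autonumbered element (all types and names here are illustrative, not an existing API; needs System.Collections.Generic):

class MediaElement { public uint Sequence; public byte[] Payload; }

SortedDictionary<uint, MediaElement> pending = new SortedDictionary<uint, MediaElement>();
uint nextToPlay = 0;

void OnElementReceived(MediaElement e)
{
    pending[e.Sequence] = e;   // out-of-order arrivals are fine
}

bool TryDequeueNext(out MediaElement e)
{
    if (pending.TryGetValue(nextToPlay, out e))
    {
        pending.Remove(nextToPlay);
        nextToPlay++;
        return true;           // this element is due: play it
    }
    return false;              // gap in the sequence: wait before playing on
}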

How to produce precisely-timed tone and silence?

I have a C# project that plays Morse code for RSS feeds. I wrote it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine-wave bursts interspersed with silence periods (the code), precisely timed as to their duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep(), then play another, etc. At its fastest, the tones and spaces can be as short as 40 ms.
It's working quite well in Managed DirectX. To get a precisely timed tone I generate 1 second of sine wave into a secondary buffer; then, to play a tone of a certain duration, I seek forward to within x milliseconds of the end of the buffer and play.
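The seek arithmetic, for reference (assuming 16-bit mono at 44,100 Hz; the commented-out buffer calls are Managed DirectX style and only a sketch):

int sampleRate = 44100, bytesPerSample = 2;   // 16-bit mono
int toneMs = 40;
int bufferBytes = sampleRate * bytesPerSample;                               // 1 second of audio
int startByte = bufferBytes - (toneMs * sampleRate / 1000) * bytesPerSample; // toneMs before the end
// secondaryBuffer.SetCurrentPosition(startByte);
// secondaryBuffer.Play(0, BufferPlayFlags.Default);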
I've tried System.Media.SoundPlayer. It's a loser [edit - see my answer below] because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, variable by CPU load. It takes an indeterminate amount of time to actually stop the tone.
I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory resident stream providing the tone data, and again seeking forward leaving the desired length of tone remaining in the stream, then playing. This worked OK on the DirectSoundOut class for a while (see below) but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd since I play to the end then put a wait of the same duration between the end of the tone and the start of the next. You'd think that 80ms after starting Play of a 40 ms tone that it wouldn't have buffers on the queue.
DirectSoundOut works well for a while, but its problem is that for every tone burst Play() it spins off a separate thread. Eventually (5 min or so) it just stops working. You can see thread after thread after thread exiting in the Output window while running the project in VS2008 IDE. I don't create new objects during playing, I just Seek() the tone stream then call Play() over and over, so I don't think it's a problem with orphaned buffers/whatever piling up till it's choked.
I'm out of patience on this one, so I'm asking in the hopes that someone here has faced a similar requirement and can steer me in a direction with a likely solution.
I can't believe it... I went back to System.Media.SoundPlayer and got it to do just what I want... no giant dependency library with 95% unused code and/or quirks waiting to be discovered :-). Furthermore, it runs on MacOSX under Mono (2.6)!!! [wrong - no sound, will ask separate question]
I used a MemoryStream and BinaryWriter to crib a WAV file, complete with the RIFF header and chunking. No "fact" chunk needed; this is 16-bit samples at 44100 Hz. So now I have a MemoryStream with 1000 ms of samples in it, wrapped by a BinaryWriter.
In a RIFF file there are two 4-byte/32-bit lengths, the "overall" length which is 4 bytes into the stream (right after "RIFF" in ASCII), and a "data" length just before the sample data bytes. My strategy was to seek in the stream and use the BinaryWriter to alter the two lengths to fool the SoundPlayer into thinking the audio stream is just the length/duration I want, then Play() it. Next time, the duration is different, so once again overwrite the lengths in the MemoryStream with the BinaryWriter, Flush() it and once again call Play().
When I tried this, I couldn't get the SoundPlayer to see the changes to the stream, even if I set its Stream property. I was forced to create a new SoundPlayer... every 40 milliseconds??? No.
Well, I went back to that code today and started looking at the SoundPlayer members. I saw "SoundLocation" and read up on it. It said that a side effect of setting SoundLocation is to null the Stream property, and vice versa for Stream. So I added a line of code to set the SoundLocation property to something bogus, "x", then set the Stream property to my (just modified) MemoryStream. Damn if it didn't pick that up and play a tone precisely as long as I asked for. There don't seem to be any crazy side effects like dead time afterward or increasing memory, or whatever. It does take 1-2 milliseconds to do that tweaking of the WAV stream and then load/start the player, but it's very small and the price is right!
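A condensed sketch of the two tricks together, assuming the standard 44-byte PCM WAV header (overall RIFF size at byte offset 4, "data" chunk size at byte offset 40; writer, wavStream and player are illustrative names for the BinaryWriter, MemoryStream and SoundPlayer described above):

int dataBytes = durationMs * 44100 / 1000 * 2;  // bytes of 16-bit mono sample data
writer.Seek(4, SeekOrigin.Begin);
writer.Write(36 + dataBytes);                   // overall RIFF chunk size = file size - 8
writer.Seek(40, SeekOrigin.Begin);
writer.Write(dataBytes);                        // "data" chunk size
writer.Flush();
wavStream.Position = 0;

player.SoundLocation = "x";                     // bogus location: side effect clears Stream
player.Stream = wavStream;                      // reassigning forces the player to reload it
player.Play();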
I also implemented a Frequency property which re-generates the samples and uses the Seek/BinaryWriter trick to overlay the old data in the RIFF/WAV MemoryStream with the same number of samples but for a different frequency, and again did the same thing for an Amplitude property.
This project is on SourceForge. You can get to the C# code for this hack in SPTones.CS from this page in the SVN browser. Thanks to everyone who provided info on this, including #arke whose thinking was close to mine. I do appreciate it.
It's best to just generate the sine waves and silence together into a buffer which you play. That is, always play something, but write whatever you need next into that buffer.
You know the sample rate, and given the sample rate you can calculate the number of samples you need to write:
uint numSamples = (uint)(timeWantedInSeconds * sampleRate);
That's the number of samples you need to generate for a sine wave or silence, whichever. Then just fill the buffer as needed. That way, you get the most accurate possible timing.
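For instance, a sketch that renders a 40 ms tone followed by 40 ms of silence into one 16-bit mono buffer (the frequency and level are arbitrary choices):

int sampleRate = 44100;
double freq = 700.0;                                    // typical Morse sidetone pitch
int toneSamples = 40 * sampleRate / 1000;               // 40 ms of tone
short[] buffer = new short[toneSamples * 2];            // tone plus an equal silence
for (int i = 0; i < toneSamples; i++)
    buffer[i] = (short)(0.5 * short.MaxValue * Math.Sin(2 * Math.PI * freq * i / sampleRate));
// the second half of the buffer stays zero, i.e. 40 ms of silence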
Try using XNA.
You will have to provide a file or a stream of a static tone that you can loop. You can then change the pitch and volume of that tone.
Since XNA is made for games, it will have no problem at all with 40 ms delays.
It should be pretty easy to convert from ManagedDX to SlimDX ...
Edit: By the way, what stops you from just pre-generating n samples of sine wave (where n corresponds to the number of milliseconds you want)? It really doesn't take that long to generate the data. Beyond that, if you have a 22 kHz buffer and you want the final 100 samples, why not just submit buffer + 21950 and set the buffer length to 100 samples?

Recording/retrieving input events with irregular frequency

Just as a toy, I'm using the iTunes SDK and XNA to make my own quasi-GuitarHero game. The actual libraries aren't important, so I didn't tag them. This question is about a data structure.
Basically I want to start playing a song, and allow the user to play guitar to the song, recording in memory the Red/Yellow/Green/Blue/Orange key presses, as well as the strum, to play back later.
I've tried several different techniques, the most accurate being a bitwise int[] array where each element represents a 10 ms time slot (and each bit of each int represents a physical key), indexed as an offset from the song start. This seems inefficient, however, as I have to pigeonhole key presses into these 10 ms slots, not to mention the huge array size for a several-minute song.
Any suggestions for a better way to implement this? My goal is to then serialize this data structure to disk for retrieval later. The overall goal of this project is to use this data to control LEDs in some fashion to a song, FWIW.
Thanks!
I would store the key-up and key-down events in a log format with a timestamp (relative to the start of the file) at the appropriate precision. You could use a List together with a custom class storing the details of the event: which key, up or down, and the timestamp.
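A minimal sketch of that structure (all names are illustrative):

enum FretKey { Green, Red, Yellow, Blue, Orange, Strum }

struct KeyEvent
{
    public double TimeMs;   // offset from song start
    public FretKey Key;
    public bool IsDown;     // true = press, false = release
}

List<KeyEvent> recording = new List<KeyEvent>();   // System.Collections.Generic

void OnKey(FretKey key, bool isDown, double songTimeMs)
{
    recording.Add(new KeyEvent { TimeMs = songTimeMs, Key = key, IsDown = isDown });
}

Serializing the list to disk afterwards is straightforward, and playback is just walking the list in time order.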

Timing in C# real time audio analysis

I'm trying to determine the "beats per minute" from real-time audio in C#. It's not music that I'm detecting, though, just a constant tapping sound. My problem is determining the time between those taps so I can determine "taps per minute". I have tried using the WaveIn.cs class out there, but I don't really understand how it's sampling. I'm not getting a set number of samples a second to analyze. I guess I really just don't know how to read in an exact number of samples a second to know the time between two samples.
Any help to get me in the right direction would be greatly appreciated.
I'm not sure which WaveIn.cs class you're using, but usually with code that records audio, you either A) tell the code to start recording, and then at some later point you tell the code to stop, and you get back an array (usually of type short[]) that comprises the data recorded during this time period; or B) tell the code to start recording with a given buffer size, and as each buffer is filled, the code makes a callback to a method you've defined with a reference to the filled buffer, and this process continues until you tell it to stop recording.
Let's assume that your recording format is 16 bits (aka 2 bytes) per sample, 44100 samples per second, and mono (1 channel). In the case of (A), let's say you start recording and then stop recording exactly 10 seconds later. You will end up with a short[] array that is 441,000 (44,100 x 10) elements in length. I don't know what algorithm you're using to detect "taps", but let's say that you detect taps in this array at element 0, element 22,050, element 44,100, element 66,150 etc. This means you're finding taps every .5 seconds (because 22,050 is half of 44,100 samples per second), which means you have 2 taps per second and thus 120 BPM.
In the case of (B) let's say you start recording with a fixed buffer size of 44,100 samples (aka 1 second). As each buffer comes in, you find taps at element 0 and at element 22,050. By the same logic as above, you'll calculate 120 BPM.
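The arithmetic from case (A), as a sketch:

// tap positions (in samples) -> beats per minute
int sampleRate = 44100;
int[] taps = { 0, 22050, 44100, 66150 };     // detected tap sample indices
double avgGap = (taps[taps.Length - 1] - taps[0]) / (double)(taps.Length - 1);  // 22,050 samples
double bpm = 60.0 * sampleRate / avgGap;     // 0.5 s per tap -> 120 BPM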
Hope this helps. With beat detection in general, it's best to record for a relatively long time and count the beats through a large array of data. Trying to estimate the "instantaneous" tempo is more difficult and prone to error, just like estimating the pitch of a recording is more difficult to do in realtime than with a recording of a full note.
I think you might be confusing samples with "taps."
A sample is a number representing the height of the sound wave at a given moment in time. A typical wave file might be sampled 44,100 times a second, so if you have two channels for stereo, you have 88,200 sixteen-bit numbers (samples) per second.
If you take all of these numbers and graph them, you get the familiar waveform picture (the original answer embedded a waveform image from vbaccelerator.com). The tap is the sharp peak that stands out in that waveform.
Assuming we're talking about the same WaveIn.cs, the constructor of WaveLib.WaveInRecorder takes a WaveLib.WaveFormat object as a parameter. This allows you to set the audio format, i.e. sample rate, bit depth, etc. Just scan the audio samples for peaks, or however you're detecting "taps", and record the average distance in samples between peaks.
Since you know the sample rate of the audio stream (e.g. 44,100 samples/second), take your average peak distance (in samples), multiply by 1/(sample rate) to get the time (in seconds) between taps, divide by 60 to get the time (in minutes) between taps, and invert to get taps/minute.
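That chain of conversions as a sketch:

// average peak distance in samples -> taps per minute
double TapsPerMinute(double avgPeakDistanceSamples, double sampleRate)
{
    double secondsPerTap = avgPeakDistanceSamples / sampleRate;   // multiply by 1/(sample rate)
    double minutesPerTap = secondsPerTap / 60.0;                  // divide by 60
    return 1.0 / minutesPerTap;                                   // invert
}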
Hope that helps.
