I have a C# project that plays Morse code for RSS feeds. I wrote it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine-wave bursts interspersed with periods of silence (the code), precisely timed in duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep(), then play another, etc. At its fastest, the tones and spaces can be as short as 40 ms.
It's working quite well in Managed DirectX. To get a precisely timed tone I generate 1 second of sine wave into a secondary buffer; then, to play a tone of a certain duration, I seek forward to within x milliseconds of the end of the buffer and play.
I've tried System.Media.SoundPlayer. It's a loser [edit - see my answer below] because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, with the length varying by CPU load; it takes an indeterminate amount of time to actually stop the tone.
I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory-resident stream providing the tone data, again seeking forward to leave the desired length of tone remaining in the stream, then playing. This worked OK with the DirectSoundOut class for a while (see below), but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd, since I play to the end and then wait the same duration between the end of the tone and the start of the next. You'd think that 80 ms after starting to Play a 40 ms tone it wouldn't still have buffers on the queue.
DirectSoundOut works well for a while, but its problem is that it spins off a separate thread for every tone-burst Play(). Eventually (5 min or so) it just stops working. You can see thread after thread after thread exiting in the Output window while running the project in the VS2008 IDE. I don't create new objects during playing; I just Seek() the tone stream and then call Play() over and over, so I don't think it's a problem with orphaned buffers (or whatever) piling up until it chokes.
I'm out of patience on this one, so I'm asking in the hopes that someone here has faced a similar requirement and can steer me in a direction with a likely solution.
I can't believe it... I went back to System.Media.SoundPlayer and got it to do just what I want... no giant dependency library with 95% unused code and/or quirks waiting to be discovered :-). Furthermore, it runs on MacOSX under Mono (2.6)!!! [wrong - no sound, will ask separate question]
I used a MemoryStream and BinaryWriter to crib a WAV file, complete with the RIFF header and chunking. No "fact" chunk is needed; this is 16-bit samples at 44100 Hz. So now I have a MemoryStream with 1000 ms of samples in it, wrapped by a BinaryWriter.
In a RIFF file there are two 4-byte/32-bit lengths: the "overall" length, which is 4 bytes into the stream (right after "RIFF" in ASCII), and a "data" length just before the sample data bytes. My strategy was to seek in the stream and use the BinaryWriter to alter the two lengths, fooling the SoundPlayer into thinking the audio stream is exactly the length/duration I want, then Play() it. The next time the duration is different, so I once again overwrite the lengths in the MemoryStream with the BinaryWriter, Flush() it, and once again call Play().
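For illustration, a minimal sketch of that patching step, assuming a canonical 44-byte WAV header (overall length at byte offset 4, data length at offset 40) and the 16-bit mono 44100 Hz format above; writer is the BinaryWriter over the MemoryStream:

    int bytesWanted = durationMs * 44100 / 1000 * 2;  // 2 bytes per 16-bit mono sample
    writer.Seek(4, SeekOrigin.Begin);                 // overall RIFF length field
    writer.Write(36 + bytesWanted);                   // 36 = header bytes after this field
    writer.Seek(40, SeekOrigin.Begin);                // "data" chunk length field
    writer.Write(bytesWanted);                        // BinaryWriter writes little-endian, as RIFF requires
    writer.Flush();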
When I tried this, I couldn't get the SoundPlayer to see the changes to the stream, even if I set its Stream property. I was forced to create a new SoundPlayer... every 40 milliseconds??? No.
Well, I went back to that code today and started looking at the SoundPlayer members. I saw SoundLocation and read its documentation. There it said that a side effect of setting SoundLocation would be to null the Stream property, and vice versa for Stream. So I added a line of code to set the SoundLocation property to something bogus, "x", then set the Stream property to my (just modified) MemoryStream. Damn if it didn't pick that up and play a tone precisely as long as I asked for. There don't seem to be any crazy side effects like dead time afterward or increasing memory, or ??? It does take 1-2 milliseconds to do that tweaking of the WAV stream and then load/start the player, but it's very small and the price is right!
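In code, the trick looks something like this (a sketch with hypothetical names, not the exact SourceForge code):

    memStream.Position = 0;        // rewind the freshly patched WAV stream
    player.SoundLocation = "x";    // bogus location; side effect: nulls the Stream property
    player.Stream = memStream;     // re-assigning forces SoundPlayer to reload the stream
    player.Play();                 // asynchronous; returns immediately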
I also implemented a Frequency property which regenerates the samples and uses the Seek/BinaryWriter trick to overlay the old data in the RIFF/WAV MemoryStream with the same number of samples at a different frequency, and did the same thing for an Amplitude property.
This project is on SourceForge. You can get to the C# code for this hack in SPTones.CS from this page in the SVN browser. Thanks to everyone who provided info on this, including #arke whose thinking was close to mine. I do appreciate it.
It's best to just generate the sine waves and silence together into a buffer which you play. That is, always play something, but write whatever you need next into that buffer.
You know the sample rate, and given the sample rate, you can calculate the number of samples you need to write.
uint numSamples = timeWantedInSeconds * sampleRate;
That's the number of samples you need to generate, whether sine wave or silence. Then just fill the buffer as needed. That way, you get the most accurate timing possible.
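A minimal sketch of that idea, assuming 16-bit mono samples and hypothetical parameter names:

    // One contiguous block: toneMs of sine wave followed by silenceMs of zeros.
    short[] GenerateElement(double freqHz, int toneMs, int silenceMs, int sampleRate)
    {
        int toneSamples = toneMs * sampleRate / 1000;
        int totalSamples = (toneMs + silenceMs) * sampleRate / 1000;
        var buffer = new short[totalSamples];
        for (int i = 0; i < toneSamples; i++)
            buffer[i] = (short)(short.MaxValue * 0.8 *
                                Math.Sin(2 * Math.PI * freqHz * i / sampleRate));
        // The remaining samples are already zero, i.e. silence.
        return buffer;
    }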
Try using XNA.
You will have to provide a file, or a stream containing a static tone, that you can loop. You can then change the pitch and volume of that tone.
Since XNA is made for games, it will have no problem at all with 40 ms delays.
It should be pretty easy to convert from ManagedDX to SlimDX ...
Edit: What stops you, btw, from just pre-generating 'n' samples of sine wave? (Where n corresponds to the number of milliseconds you want.) It really doesn't take all that long to generate the data. Beyond that, if you have a 22 kHz buffer and you want the final 100 samples, why don't you just submit 'buffer + 21950' and set the buffer length to 100 samples?
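In C# (no pointer arithmetic), that amounts to taking an offset into the pregenerated array; a rough sketch with assumed names:

    short[] oneSecond = new short[22050];     // pregenerated 1 s of sine at 22050 Hz
    // ... fill oneSecond once at startup ...
    int wanted = 100;                         // samples of tone to play this time
    int offset = oneSecond.Length - wanted;   // 21950, i.e. 'buffer + 21950'
    var tail = new short[wanted];
    Array.Copy(oneSecond, offset, tail, 0, wanted);
    // submit 'tail', or pass offset/length directly if the playback API allows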
Related
I have a project where audio is recorded at several sources, and the goal is to process X (user-defined) seconds of it through various methods (DSP as well as speech-to-text).
I'm using a MixingSampleProvider to collect all the sources into one "provider." I'm passing this to a NotifyingSampleProvider, because it raises an event for each sample, and then passing that sample to my class that does the processing. I'm adding each float the NotifyingSampleProvider produces to the end of my "X second window" array (using Array.Copy to build a temp array holding everything but the oldest value, appending the new value, and copying the temp array back over the original), which I use for processing.
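For reference, the per-sample append looks roughly like this (hypothetical names, simplified to a single in-place shift):

    // Raised once per sample by the NotifyingSampleProvider.
    void OnSample(float sample)
    {
        lock (windowLock)
        {
            // Drop the oldest sample, shift the rest left, append the new one:
            // an O(n) copy performed 44100 times a second.
            Array.Copy(windowBuffer, 1, windowBuffer, 0, windowBuffer.Length - 1);
            windowBuffer[windowBuffer.Length - 1] = sample;
        }
    }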
The obvious problem here is that it notifies (and that I'm locking and adding to the "X second window" array) for every single sample, or 44100 times a second. This leads to the audio being pretty much constantly locked so it can't be processed. There's got to be a more performant way to deal with this.
The one idea I had was a BufferedWaveProvider that doesn't get read from anywhere else, so it's always full (with DiscardOnOverflow = true of course). However A) this still requires a NotifyingSampleProvider to add to it periodically (you can't pass a provider to the BufferedWaveProvider for it to automatically read from) and I'd like to get away from such frequent (44100 Hz) function calls, and B) I understand that there's a limit to the BufferDuration length, which might be too small of a window (I don't recall what the limit is and I can't find anything online saying what it is).
How might I solve this problem? Using NAudio, how do I keep the last X seconds of audio accessible to a separate class at any time, without using the NotifyingSampleProvider?
I am attempting to propagate a single sound source to multiple outputs (such as one microphone input to multiple sound cards or channels). The output does not have to be sync'd (a few ms delay is acceptable) but it would be nice if it could be sync'd.
I have successfully written code that loops a microphone input to an output using a WaveIn, a BufferedWaveProvider, and a WaveOut. However, when I try to read one BufferedWaveProvider with two instances of WaveOut, the two outputs create this odd 'interleaved' choppy sound. Here is a code snippet for the output portion:
private void CreateWaveOutDevice()
{
    waveProvider = new BufferedWaveProvider(waveIn.WaveFormat);

    waveOut = new WaveOut();
    waveOut.DeviceNumber = 0; // Sound card 1
    waveOut.DesiredLatency = 100;
    waveOut.Init(waveProvider);
    waveOut.PlaybackStopped += wavePlayer_PlaybackStopped;

    waveOut2 = new WaveOut();
    waveOut2.DeviceNumber = 1; // Sound card 2
    waveOut2.DesiredLatency = 100;
    waveOut2.Init(waveProvider);
    waveOut2.PlaybackStopped += wavePlayer_PlaybackStopped;

    waveOut.Play();
    waveOut2.Play();
}
I think the reason this is happening is that when the waveProvider's circular buffer is read, the data is removed, so the two read methods are 'fighting' over the data, which results in the choppy sound.
So I really have two questions:
1.) I see the NAudio library contains many types of WaveStream (RawSourceWaveStream is particularly interesting). However, I have been unable to find a good example of how to read a single stream with multiple WaveOut instances, and I have been unable to create working code using a WaveStream with multiple outputs. Is anyone familiar with WaveStreams, and do you know if this is something that can be done?
2.) If the NAudio wave streams cannot be used in a single-producer, multiple-consumer situation, then I believe I would need to make a circular buffer that is not cleared on a read, but only when the buffer is full and new data is pushed in. The code won't care whether the data was read or not; it just keeps filling the buffer. Would this be the correct approach?
I've spent days searching so hopefully this hasn't already been asked. Thanks for reading.
If you're just reading from a microphone and want two WaveOuts to play it, then the simple option is to create two BufferedWaveProviders, one for each WaveOut, and then, when audio is received, send it to both.
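A minimal sketch of that, reusing the WaveIn setup from the question:

    var provider1 = new BufferedWaveProvider(waveIn.WaveFormat);
    var provider2 = new BufferedWaveProvider(waveIn.WaveFormat);

    // Feed every recorded buffer to both providers.
    waveIn.DataAvailable += (s, e) =>
    {
        provider1.AddSamples(e.Buffer, 0, e.BytesRecorded);
        provider2.AddSamples(e.Buffer, 0, e.BytesRecorded);
    };

    var waveOut1 = new WaveOut { DeviceNumber = 0, DesiredLatency = 100 };
    var waveOut2 = new WaveOut { DeviceNumber = 1, DesiredLatency = 100 };
    waveOut1.Init(provider1);
    waveOut2.Init(provider2);
    waveOut1.Play();
    waveOut2.Play();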
Likewise if you were playing from an audio file to two soundcards, the easiest way is to use two reader objects and start them both separately.
There is unfortunately no easy way to synchronize, short of starting and stopping both players at the same time.
There are a few more advanced ways to try to split off an audio stream to two readers, but there can be complications especially if the two readers are not able to read at roughly the same rate.
I am attempting to create an application similar to the network chat demo in NAudio.
On the receiving/listening end, however, I am buffering my received audio packets until I get 2 of them before calling WaveProvider.AddSamples(). I have experimented with not buffering at all, buffering a large amount, etc., but the issue remains, so I don't think this is the explicit cause.
The issue I am having is that no matter how fast I attempt to add samples, the WaveProvider is getting exhausted (BufferedBytes == 0) repeatedly while the audio is playing. I ran some tests, and my debug output goes something like this:
0.690s WaveProvider Empty
0.695s Samples are added to WaveProvider
0.871s WaveProvider Empty
0.875s Samples added to WaveProvider
1.013s WaveProvider Empty
... etc.
The WaveProvider only reports "empty" every 150 ms or so, which makes sense, as each AddSamples call adds data corresponding to 100 ms of audio.
The interesting part is the 4-5 ms of time between the moment the WaveProvider is empty and the next AddSamples attempt. No change in buffer size, and no effort to call AddSamples faster, seems to decrease this gap, and thus the audio played back always has these tiny gaps of silence, which sound like pops/breaks.
I was wondering if this tiny time delay might be caused by the WaveProvider being locked while the waveOut device is playing from it, so that I am locked out from adding samples to the WaveProvider until it has finished playing? Is there any way to rectify this?
My initialization code is as follows (it should be normal, I hope):
WaveProvider = new BufferedWaveProvider(new WaveFormat(16000, 1)); // 16 kHz, mono
WaveProvider.BufferDuration = new TimeSpan(0, 0, 10);              // hold up to 10 seconds
AudioPlayer = new WaveOut(WaveCallbackInfo.FunctionCallback());
AudioPlayer.Init(WaveProvider);
I would appreciate any help on the matter.
Thank you
If you are not receiving data as fast as you are playing it, you will have gaps in playback and there is not much you can do about it. However, you can improve the experience by automatically going into pause while you buffer up (say) a second of audio; then, if the buffer runs out, you go into pause again. That way the audio will sound a lot less choppy.
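A sketch of that approach (threshold values are just examples):

    // On each received packet: buffer it, and resume once enough has accumulated.
    void OnAudioReceived(byte[] data, int length)
    {
        bufferedWaveProvider.AddSamples(data, 0, length);
        if (paused && bufferedWaveProvider.BufferedDuration.TotalSeconds >= 1.0)
        {
            waveOut.Play();
            paused = false;
        }
    }

    // Checked periodically (e.g. from a timer): pause instead of playing gaps.
    void CheckForUnderrun()
    {
        if (!paused && bufferedWaveProvider.BufferedBytes == 0)
        {
            waveOut.Pause();
            paused = true;
        }
    }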
I am using NAudio to generate pulse-width-modulated audio signals for controlling a pair of servos. Currently I am using the WaveProvider32 class that Mark Heath wrote (http://mark-dot-net.blogspot.com/2009/10/playback-of-sine-wave-in-naudio.html), which implements the IWaveProvider interface. The sample rate is 44100.
The audio signal is basically a block N samples wide, where for the first part of the block all the values are high, and for the remainder of the block the values are low. Since the Read operation asks for more samples than the width of the block, I just repeat the signal until I fill up the buffer. The problem I have is that the length of the buffer is not a multiple of the width of my signal block, so part of the last block is cut off, which confuses the servo and makes it twitch. I realize I could do some code fanciness to keep track of it and offset the beginning of the next Read, but it would be easier if I could set the number of values that the WaveProvider had to provide at once, so that I could make it a multiple (or maybe the exact width) of the signal-block size.
Is that possible?
The amount of data requested by the Read function is determined by the IWaveOut implementation you choose and the latency and number of buffers it is operating at. You would need to create an intermediate IWaveProvider that ensures that Read calls to the underlying provider always ask for the right number of samples. Have a look at the BlockAlignReductionStream, which I created for a similar issue.
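If BlockAlignReductionStream doesn't fit (it's a WaveStream rather than a plain IWaveProvider), a wrapper along these lines would do it. This is a sketch with hypothetical names, where blockBytes is your signal-block size in bytes:

    // using NAudio.Wave; using System; using System.Collections.Generic;
    public class BlockSizedProvider : IWaveProvider
    {
        private readonly IWaveProvider source;
        private readonly int blockBytes;
        private readonly Queue<byte> leftover = new Queue<byte>();

        public BlockSizedProvider(IWaveProvider source, int blockBytes)
        {
            this.source = source;
            this.blockBytes = blockBytes;
        }

        public WaveFormat WaveFormat { get { return source.WaveFormat; } }

        public int Read(byte[] buffer, int offset, int count)
        {
            int needed = count - leftover.Count;
            if (needed > 0)
            {
                // Round the request up to a whole number of signal blocks.
                int toRead = (needed + blockBytes - 1) / blockBytes * blockBytes;
                var temp = new byte[toRead];
                int got = source.Read(temp, 0, toRead);
                for (int i = 0; i < got; i++) leftover.Enqueue(temp[i]);
            }
            // Hand the caller exactly what it asked for; keep the excess.
            int n = Math.Min(count, leftover.Count);
            for (int i = 0; i < n; i++) buffer[offset + i] = leftover.Dequeue();
            return n;
        }
    }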
I'm trying to determine the "beats per minute" from real-time audio in C#. It is not music that I'm detecting, though, just a constant tapping sound. My problem is determining the time between those taps so I can determine "taps per minute." I have tried using the WaveIn.cs class out there, but I don't really understand how it's sampling. I'm not getting a set number of samples a second to analyze. I guess I really just don't know how to read in an exact number of samples a second so I can know the time between two samples.
Any help to get me in the right direction would be greatly appreciated.
I'm not sure which WaveIn.cs class you're using, but usually with code that records audio, you either A) tell the code to start recording, and then at some later point you tell the code to stop, and you get back an array (usually of type short[]) that comprises the data recorded during this time period; or B) tell the code to start recording with a given buffer size, and as each buffer is filled, the code makes a callback to a method you've defined with a reference to the filled buffer, and this process continues until you tell it to stop recording.
Let's assume that your recording format is 16 bits (aka 2 bytes) per sample, 44100 samples per second, and mono (1 channel). In the case of (A), let's say you start recording and then stop recording exactly 10 seconds later. You will end up with a short[] array that is 441,000 (44,100 x 10) elements in length. I don't know what algorithm you're using to detect "taps", but let's say that you detect taps in this array at element 0, element 22,050, element 44,100, element 66,150 etc. This means you're finding taps every .5 seconds (because 22,050 is half of 44,100 samples per second), which means you have 2 taps per second and thus 120 BPM.
In the case of (B) let's say you start recording with a fixed buffer size of 44,100 samples (aka 1 second). As each buffer comes in, you find taps at element 0 and at element 22,050. By the same logic as above, you'll calculate 120 BPM.
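Either way, once you have the tap positions as sample indices, the arithmetic is the same; a small sketch with hypothetical names:

    static double TapsPerMinute(int[] tapSampleIndices, int sampleRate)
    {
        if (tapSampleIndices.Length < 2) return 0;
        double totalGap = 0;
        for (int i = 1; i < tapSampleIndices.Length; i++)
            totalGap += tapSampleIndices[i] - tapSampleIndices[i - 1];
        double avgSamplesBetweenTaps = totalGap / (tapSampleIndices.Length - 1);
        double secondsBetweenTaps = avgSamplesBetweenTaps / sampleRate;
        return 60.0 / secondsBetweenTaps;
    }

    // e.g. taps at samples 0, 22050, 44100, 66150 at 44100 Hz => 120 BPM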
Hope this helps. With beat detection in general, it's best to record for a relatively long time and count the beats through a large array of data. Trying to estimate the "instantaneous" tempo is more difficult and prone to error, just like estimating the pitch of a recording is more difficult to do in realtime than with a recording of a full note.
I think you might be confusing samples with "taps."
A sample is a number representing the height of the sound wave at a given moment in time. A typical wave file might be sampled 44,100 times a second, so if you have two channels for stereo, you have 88,200 sixteen-bit numbers (samples) per second.
If you take all of these numbers and graph them, you get a typical waveform plot (the original post included an example image from vbaccelerator.com). The tap is the sharp peak that stands out in that waveform.
Assuming we're talking about the same WaveIn.cs, the constructor of WaveLib.WaveInRecorder takes a WaveLib.WaveFormat object as a parameter. This allows you to set the audio format, i.e. sample rate, bit depth, etc. Just scan the audio samples for peaks (or however you're detecting "taps") and record the average distance in samples between peaks.
Since you know the sample rate of the audio stream (e.g. 44100 samples/second), take your average peak distance (in samples) and multiply by 1/(sample rate) to get the time (in seconds) between taps; divide by 60 to get the time (in minutes) between taps, and invert to get taps/minute.
Hope that helps