Synchronous WAV playback in NAudio - C#

I'm making a game music player in C# based on NAudio. The original PCM-16 four-channel audio file (Wii AST) is brought into the application, and the channels are split into two separate stereo wav files. I have a slider in my application to allow you to fade between the two files, because each pair of channels from the original AST file contains a different part/version of the song.
I'm using two instances of WaveOut and AudioFileReader for the two wave files to play them back, and I have a trackbar in the application to adjust the "channel bias" (in other words, the volume of the WaveOut instances). My problem is that a lot of the time (especially when seeking using the trackbar), the two audio files end up playing out-of-sync. I can't seem to get them to play at the exact same time. Does anyone know how to get the files to play in sync?
The only solution I can think of is to convert the two split files into a single 4-channel wave file so you could just adjust the volume of the channel pairs and not have to worry about syncing, but NAudio doesn't seem to have a way to do that.
Please keep in mind that I am new to asking questions on StackOverflow, so if I need to provide more detail or if there was something I failed to explain, please let me know. And please, be constructive.
Thanks!

So, as it turns out, I completely overlooked a class in NAudio that mixes multiple audio streams together. What I ended up doing was creating an instance of MixingWaveProvider32 at the beginning of the program and initializing it when the audio files were created by the decoder, to prepare for playback. Here's the final code I came up with, which ended up working:
MixingWaveProvider32 mixingWaveProvider32;
WaveOut masterOut;
AudioFileReader audioFileReader;
AudioFileReader audioFileReader2;
...
void loadAstFile(string path)
{
    string stream1_Path = path + "_c1.wav";
    string stream2_Path = path + "_c2.wav";
    DisposeAudioFileReaders();
    AstReader astReader = new AstReader(path);
    if (astReader.Channels == 4)
    {
        astReader.Export(stream1_Path, 1.0f);
        astReader.Export(stream2_Path, 0.0f);
        audioFileReader = new AudioFileReader(stream1_Path);
        audioFileReader2 = new AudioFileReader(stream2_Path);
        mixingWaveProvider32 = new MixingWaveProvider32(new IWaveProvider[] { audioFileReader, audioFileReader2 });
    }
    else
    {
        astReader.Export(stream1_Path, 1.0f);
        audioFileReader = new AudioFileReader(stream1_Path);
        audioFileReader.Volume = 1.0f;
        mixingWaveProvider32 = new MixingWaveProvider32(new IWaveProvider[] { audioFileReader });
    }
    masterOut = new WaveOut();
    masterOut.Init(mixingWaveProvider32);
    masterOut.Volume = 1.0f;
    RefreshComponents();
    Play();
}
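With both readers feeding a single mixer, the "channel bias" slider only has to adjust the Volume of each AudioFileReader; because one WaveOut pulls both streams through the mixer, they can never drift apart. A minimal sketch of such a trackbar handler (biasTrackBar and its 0–100 range are hypothetical names, not from the original code):

```csharp
// Hypothetical trackbar handler: biasTrackBar runs from 0 to 100.
// At 0 only the first channel pair is heard, at 100 only the second.
void biasTrackBar_Scroll(object sender, EventArgs e)
{
    if (audioFileReader == null || audioFileReader2 == null)
        return;
    float bias = biasTrackBar.Value / 100f;
    audioFileReader.Volume = 1.0f - bias;  // fade the first pair out...
    audioFileReader2.Volume = bias;        // ...as the second fades in
}
```

Since MixingWaveProvider32 reads both inputs in lockstep, seeking also stays in sync as long as you set both readers' Position to the same value before resuming playback.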
Hopefully this helps somebody out there!

Related

C# NAudio: How to access the samples provided by SignalGenerator in order to save them to WAVE format?

I have the following constructor, which successfully plays pink noise directly to my audio output device:
public WAVEManager(string inputFileName, string outputFileName)
{
    IWavePlayer outputDevice;
    outputDevice = new WaveOutEvent();
    SignalGenerator pinkNoiseGenerator = new SignalGenerator();
    pinkNoiseGenerator.Type = SignalGeneratorType.Pink;
    outputDevice.Init(pinkNoiseGenerator);
    outputDevice.Play();
    // Wait for 10 seconds
    System.Threading.Thread.Sleep(10000);
}
This all works fine. I understand that now if I want to write to a .wav, I have to initialise a WaveFileWriter like so:
WaveFileWriter writer = new WaveFileWriter(outputFileName, pinkNoiseGenerator.WaveFormat);
And then write to the created WAVE file:
writer.WriteData(buffer, 0, numSamples);
The rub being that I have no idea how to populate the buffer directly from pinkNoiseGenerator. I have searched through the documentation and examples and can't find anything to do with this - I imagine it must involve the .Read() method of the SignalGenerator class, but as the generator plays indefinitely, it has no defined length. To me, this means that the buffer can't be populated in the same way it could if we were, say, reading directly from an input WAVE file (as far as I can tell).
Could someone please point me in the right direction?
Thanks.
Here's how you can create a WAV file containing 10 seconds of pink noise:
var pinkNoiseGenerator = new SignalGenerator();
pinkNoiseGenerator.Type = SignalGeneratorType.Pink;
WaveFileWriter.CreateWaveFile16("pinkNoise.wav", pinkNoiseGenerator.Take(TimeSpan.FromSeconds(10)));

NAudio Loopback Record Crackle Sound Error

I'm recording sound via the WasapiLoopbackCapture and write it to an MP3-File via the NAudio.Lame lib:
LAMEPreset quality = LAMEPreset.ABR_320;
audiostream = new WasapiLoopbackCapture();
audiostream.DataAvailable += stream_DataAvailable;
audiostream.RecordingStopped += stream_RecordingStopped;
mp3writer = new LameMP3FileWriter(Environment.GetEnvironmentVariable("USERPROFILE") + @"\Music\record_temp.mp3",
    audiostream.WaveFormat, quality);
audiostream.StartRecording();
When the user presses the stop-recording-button, I save the MP3 and stop the recording:
mp3writer.Flush();
audiostream.Dispose();
mp3writer.Dispose();
All works fine, except that the output file has some disturbing crackle noises in it (see here for an example). I think it might be that my computer is a bit too slow to compress and write the audio data in real time, so some of the values get lost, but that is just my guess.
Edit: When recording to WAVE, the errors don't appear.
What may be the problem here and how could I possibly solve it / work around it?
Start off by saving your audio to a WAV file. Does that have crackles in it? If so, the crackles are coming from the soundcard. If not, they are coming from the MP3 encoding code.
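A sketch of that diagnostic, assuming the same capture setup as the question but with WaveFileWriter in place of the LAME writer (the output path mirrors the question's and is just an example):

```csharp
// Diagnostic sketch: record the loopback capture straight to WAV instead
// of MP3, to see whether the crackles come from the capture itself or
// from the LAME encoding step.
var capture = new WasapiLoopbackCapture();
var wavWriter = new WaveFileWriter(
    Environment.GetEnvironmentVariable("USERPROFILE") + @"\Music\record_temp.wav",
    capture.WaveFormat);

capture.DataAvailable += (s, e) =>
    wavWriter.Write(e.Buffer, 0, e.BytesRecorded);  // raw captured bytes
capture.RecordingStopped += (s, e) =>
{
    wavWriter.Dispose();
    capture.Dispose();
};
capture.StartRecording();
```

If the WAV is clean, the bottleneck is the real-time MP3 encoding; a common workaround is to record to WAV first and encode to MP3 after the recording stops.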

Using Naudio to amplify Microphone Input

I have played with the NAudio examples, and am able to amplify a WAV file that is opened using the "WaveFileStream" function (code example: AudioPlaybackPanel). This works fine:
I add a variable declaration, so I can access the channel later:
SampleChannel waveFromFile;
And in the existing function, I set it:
private ISampleProvider CreateInputStream(string fileName)
{
    ...
    this.fileWaveStream = plugin.CreateWaveStream(fileName);
    var waveChannel = new SampleChannel(this.fileWaveStream, true);
    waveFromFile = waveChannel;
    ...
}
Then I add an AMPLIFY button, and this works just as I expect:
float ampFactor = 1.0f;
private void ampButton_Click(object sender, EventArgs e)
{
    ampFactor += 2;
    if (ampFactor >= 9.0f)
        ampFactor = 1.0f;
    waveFromFile.Volume = ampFactor;
}
But how can I do this when the input is not a WAV file, but instead is a microphone?
Looking at the NAudio examples, I tried adding this code to the "RecordingPanel" demo, but it is ignored -- meaning I set the Volume value, but there is no change.
Is it possible to amplify the audio coming in from the microphone? And, if so, can someone show a sample code snippet? I've looked online, but I can't seem to find it.
To take advantage of SampleChannel's ability to modify samples, you'd actually need to pull the audio through the SampleChannel. To do that, you could put the recorded audio into a BufferedWaveProvider and then put that into SampleChannel. Then you'd need to make sure you pulled enough audio back out of the SampleChannel and into the WaveFileWriter so it doesn't fill up. You may also want to use a SampleToWaveProvider16 if you wanted a 16 bit WAV file.
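A minimal sketch of that pipeline, assuming a WaveInEvent microphone source (the variable names and the 4x gain are illustrative, not from the question):

```csharp
// Pipeline described above:
// WaveInEvent -> BufferedWaveProvider -> SampleChannel (volume)
//             -> SampleToWaveProvider16 -> WaveFileWriter
var waveIn = new WaveInEvent();
var buffered = new BufferedWaveProvider(waveIn.WaveFormat);
var sampleChannel = new SampleChannel(buffered);
sampleChannel.Volume = 4.0f;  // amplification factor (example value)
var wave16 = new SampleToWaveProvider16(sampleChannel);
var writer = new WaveFileWriter("amplified.wav", wave16.WaveFormat);

waveIn.DataAvailable += (s, e) =>
{
    // feed the recorded bytes in, then pull the amplified audio back out
    // so the BufferedWaveProvider never fills up
    buffered.AddSamples(e.Buffer, 0, e.BytesRecorded);
    var outBuffer = new byte[e.BytesRecorded];
    int read = wave16.Read(outBuffer, 0, outBuffer.Length);
    writer.Write(outBuffer, 0, read);
};
waveIn.StartRecording();
```

The key point is that SampleChannel only applies its Volume when audio is *read through it*; writing the raw WaveInEvent buffers straight to the WaveFileWriter (as the RecordingPanel demo does) bypasses it entirely, which is why setting Volume there had no effect.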

SoundPlayer and playing a seamless succession of sounds

What I'm trying to do is play a looping wav, but with a specific loop start position, like the samples used in music module files (.it, .s3m, .mod, etc.). SoundPlayer doesn't have this feature, so I created two separate files: the original wav, and the wav with the beginning cut off up to where the loop starts. I want to play the two files in succession, seamlessly: first the original wav file with PlaySync(), then the loop section with PlayLooping().
Unfortunately after I PlaySync() the first wav, there's a split second interruption between it and when PlayLooping() begins to play back the loop wav.
private static void PlayThread(SoundPlayer soundPlayer, byte[] wav, byte[] loop)
{
    MemoryStream stream1 = new MemoryStream(wav);
    MemoryStream stream2 = new MemoryStream(loop);
    soundPlayer.Stream = stream1;
    soundPlayer.PlaySync();
    // here is where the split second interruption occurs
    soundPlayer.Stream = stream2;
    soundPlayer.PlayLooping();
}
This may not be noticeable when playing two separate distinct sounds, but is very noticeable when trying to pull off the looping mechanism I want. The sound wavs are instrument samples, very small (usually no more than 0x1000 bytes).
Instead of attempting to play multiple SoundPlayer objects consecutively, try concatenating the .wav files on the fly, and loading the merged .wav file into one SoundPlayer object.
Download this WaveIO class, add it to your project, and try the following code where you want to seamlessly play multiple sounds in a row:
string[] files = new string[3] { @"filepath1.wav", @"filepath2.wav", @"filepath3.wav" };
WaveIO wa = new WaveIO();
wa.Merge(files, @"tempfile.wav");
System.Media.SoundPlayer sp = new System.Media.SoundPlayer(@"tempfile.wav");
sp.Play();
I don't know anything about SoundPlayer, but perhaps eliminate lines 3 and 4 and write your second wave to the first stream instead of creating a new one. That way you are, in essence, adding more data to the stream for it to play, assuming you can write to it faster than it can play it, which you should be able to. I'm not sure off the top of my head whether a MemoryStream allows simultaneous read/write. I have done something similar, but I don't have the code on hand and it might have been slightly different.
Also, is Play() blocking? I.e., does the next line not run until playback is done? If so, the pause is when line 3 reads the wav file, so you need to move .Play to another thread so that you can begin loading the next wave while the first is playing. Even if you find that you can't write to the first memory stream and have to create a new one, at least you will have the wav file loaded into a memory stream and ready to play as soon as the first finishes. This will significantly reduce the pause.
Consider media players that have a playlist of MP3s. When one MP3 is close to finishing, the media player usually goes ahead and reads the next MP3 from disk into memory before the first has finished playing.
Here's what my test looked like:
string sMediaPath = @"C:\Windows\Media";
SoundPlayer soundPlayer = new SoundPlayer();
soundPlayer.Stream = File.OpenRead(Path.Combine(sMediaPath, "chimes.wav"));
soundPlayer.PlaySync();
soundPlayer.Stream = File.OpenRead(Path.Combine(sMediaPath, "chord.wav"));
soundPlayer.PlaySync();
I found that this test worked exactly as you wanted, with no abrupt stops between the files. If you still hear a gap, it could be due to other reasons, most likely to do with the files themselves:
Sampling rate: what is the sampling rate of the file?
Is the sound file interleaved (stereo)?
How large is the sound file?
Perhaps preloading your sounds before playing them might make a difference?
string sMediaPath = @"C:\Windows\Media";
SoundPlayer soundPlayer = new SoundPlayer();
List<Stream> oSoundStreams = new List<Stream>();
oSoundStreams.Add(File.OpenRead(Path.Combine(sMediaPath, "chimes.wav")));
oSoundStreams.Add(File.OpenRead(Path.Combine(sMediaPath, "chord.wav")));
soundPlayer.Stream = oSoundStreams[0];
soundPlayer.PlaySync();
soundPlayer.Stream = oSoundStreams[1];
soundPlayer.PlaySync();
Edit:
By creating one SoundPlayer object for each sound to be played, it seems possible to bring down the short gap you're experiencing:
string sMediaPath = @"C:\Users\Foogle\Desktop\";
SoundPlayer soundPlayer1 = new SoundPlayer();
SoundPlayer soundPlayer2 = new SoundPlayer();
soundPlayer1.Stream = File.OpenRead(Path.Combine(sMediaPath, "W.wav"));
soundPlayer1.Load();
soundPlayer2.Stream = File.OpenRead(Path.Combine(sMediaPath, "P.wav"));
soundPlayer2.Load();
for (int i = 0; i < 10; i++)
{
    soundPlayer1.PlaySync();
    soundPlayer2.PlaySync();
}
Perhaps you could create a collection of SoundPlayer objects and play them as required?
Hope this helps.
Cheers!

Sound quality issues when using NAudio to play several wav files at once

My objective is this: to allow users of my .NET program to choose their own .wav files for sound effects. These effects may be played simultaneously. NAudio seemed like my best bet.
I decided to use WaveMixerStream32. One early challenge was that my users had .wav files of different formats, so to be able to mix them together with WaveMixerStream32, I needed to "normalize" them to a common format. I wasn't able to find a good example of this to follow so I suspect my problem is a result of my doing this part wrong.
My problem is that when some sounds are played, there are very noticeable "clicking" sounds at their end. I can reproduce this myself.
Also, my users have complained that sometimes, sounds aren't played at all, or are "scratchy" all the way through. I haven't been able to reproduce this in development but I have heard this for myself in our production environment.
I've played the user's wav files myself using Windows Media and VLC, so I know the files aren't corrupt. It must be a problem with how I'm using them with NAudio.
My NAudio version is v1.4.0.0.
Here's the code I used. To set up the mixer:
_mixer = new WaveMixerStream32 { AutoStop = false };
_waveOutDevice = new WaveOut(WaveCallbackInfo.NewWindow())
{
    DeviceNumber = -1,
    DesiredLatency = 300,
    NumberOfBuffers = 3,
};
_waveOutDevice.Init(_mixer);
_waveOutDevice.Play();
Surprisingly, if I set "NumberOfBuffers" to 2 here I found that sound quality was awful, with audible "ticks" occurring several times a second.
To initialize a sound file, I did this:
var sample = new AudioSample(fileName);
sample.Position = sample.Length; // To prevent the sample from playing right away
_mixer.AddInputStream(sample);
AudioSample is my class. Its constructor is responsible for the "normalization" of the wav file format. It looks like this:
private class AudioSample : WaveStream
{
    private readonly WaveChannel32 _channelStream;

    public AudioSample(string fileName)
    {
        MemoryStream memStream;
        using (var fileStream = File.OpenRead(fileName))
        {
            memStream = new MemoryStream();
            memStream.SetLength(fileStream.Length);
            fileStream.Read(memStream.GetBuffer(), 0, (int)fileStream.Length);
        }
        WaveStream originalStream = new WaveFileReader(memStream);
        var pcmStream = WaveFormatConversionStream.CreatePcmStream(originalStream);
        var blockAlignReductionStream = new BlockAlignReductionStream(pcmStream);
        var waveFormatConversionStream = new WaveFormatConversionStream(
            new WaveFormat(44100, blockAlignReductionStream.WaveFormat.BitsPerSample, 2), blockAlignReductionStream);
        var waveOffsetStream = new WaveOffsetStream(waveFormatConversionStream);
        _channelStream = new WaveChannel32(waveOffsetStream);
    }

    // WaveStream overrides (WaveFormat, Length, Position, Read) that
    // delegate to _channelStream are omitted here.
}
Basically, the AudioSample delegates to its _channelStream object. To play an AudioSample, my code sets its "Position" to 0. This code that does this is marshalled onto the UI thread.
This almost works great. I can play multiple sounds simultaneously. Unfortunately the sound quality is bad as described above. Can anyone help me figure out why?
Some points in response to your question:
Yes, you have to have all inputs at the same sample rate before you feed them into a mixer. This is simply how digital audio works. The ACM sample rate conversion provided by WaveFormatConversionStream isn't brilliant (it has no aliasing protection). What sample rates are your input files typically at?
You are passing every input through two WaveFormatConversionStreams. Only do this if it is absolutely necessary.
I'm surprised that you are getting bad sound with NumberOfBuffers=2, which is now the default in NAudio. Have you been pausing and resuming? There was a bug where a buffer could get dropped (fixed in the latest code, and it will be fixed for NAudio 1.4 final).
A click at the end of a file can just mean it doesn't end on a zero sample. You would have to add a fade out to eliminate this (a lot of media players do this automatically)
Whenever you are troubleshooting a bad sound issue, I always recommend using WaveFileWriter to convert your WaveStream into a WAV file (taking care not to produce a never-ending file!), so you can listen to it in another media player. This allows you to quickly determine whether the problem is caused by your audio processing or by the playback itself.
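A sketch of that last diagnostic, capped at a fixed duration so the file cannot grow forever (the helper name and the 30-second cap are illustrative):

```csharp
// Diagnostic sketch: pull a bounded amount of audio from the mixer into
// a WAV file so it can be auditioned in another media player. If this
// file already crackles, the processing chain is at fault; if it is
// clean, the problem is in playback.
void DumpMixerToWav(WaveStream mixer, string path)
{
    using (var writer = new WaveFileWriter(path, mixer.WaveFormat))
    {
        // one second of audio per read
        var buffer = new byte[mixer.WaveFormat.AverageBytesPerSecond];
        for (int seconds = 0; seconds < 30; seconds++)
        {
            int read = mixer.Read(buffer, 0, buffer.Length);
            if (read == 0) break;  // inputs exhausted
            writer.Write(buffer, 0, read);
        }
    }
}
```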
