NAudio Loopback Record Crackle Sound Error - C#

I'm recording sound via WasapiLoopbackCapture and writing it to an MP3 file via the NAudio.Lame library:
LAMEPreset quality = LAMEPreset.ABR_320;
audiostream = new WasapiLoopbackCapture();
audiostream.DataAvailable += stream_DataAvailable;
audiostream.RecordingStopped += stream_RecordingStopped;
mp3writer = new LameMP3FileWriter(
    Environment.GetEnvironmentVariable("USERPROFILE") + @"\Music\record_temp.mp3",
    audiostream.WaveFormat, quality);
audiostream.StartRecording();
When the user presses the stop-recording button, I save the MP3 and stop the recording:
mp3writer.Flush();
audiostream.Dispose();
mp3writer.Dispose();
All works fine, except that the output file has some disturbing crackle noises in it (see here for an example). My guess is that my computer is a bit too slow to compress and write the audio data in real time, so some of the values get lost, but that is just a guess.
Edit: When recording to WAV, the errors don't appear.
What may be the problem here and how could I possibly solve it / work around it?

Start off by saving your audio to a WAV file. Does that have crackles in it? If so, the crackles are coming from the sound card. If not, they are coming from the MP3 encoding code.
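For reference, a minimal sketch of that WAV test, reusing the capture setup from the question (the record_test.wav name is just a placeholder):

// Write the raw captured buffers straight to a WAV file, bypassing LAME.
var capture = new WasapiLoopbackCapture();
var wavWriter = new WaveFileWriter(
    Environment.GetEnvironmentVariable("USERPROFILE") + @"\Music\record_test.wav",
    capture.WaveFormat);
capture.DataAvailable += (s, e) => wavWriter.Write(e.Buffer, 0, e.BytesRecorded);
capture.RecordingStopped += (s, e) => wavWriter.Dispose();
capture.StartRecording();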

Related

Saving Bigger Files with TTS Android

Recently I used Android TTS - I save the file as MP3 and play it using MediaPlayer so users can pause/resume, etc.
It all works fine, except that when I have a large text it just does not work.
I read that Android TTS has a limit of 4000 characters? What should I do to handle a large amount of text?
The following is the code I am using to save the MP3:
Android.Speech.Tts.TextToSpeech textToSpeech;
...
textToSpeech = new Android.Speech.Tts.TextToSpeech(this, this, "com.google.android.tts");
...
textToSpeech.SynthesizeToFile(ReadableText, null, new Java.IO.File(System.IO.Path.Combine(documentsPath, ID + "_audio.mp3")), ID);
The following is the code I am using to play back the audio:
MediaPlayer MP = new MediaPlayer();
...
MP.SetDataSource(System.IO.Path.Combine(documentsPath, ID + "_audio.mp3"));
MP.Prepare();
MP.Start();
It works for a small amount of text but not for large text.
The file gets saved (most likely just a corrupt file), because when I play it I get the following error:
setDataSourceFD failed: status=0x80000000
Java Solution is also acceptable
FYI - the question is about the max text size, as I can generate the file for smaller text.
Cheers
In Android AOSP (at least since API 18), TextToSpeech.MaxSpeechInputLength is set to 4000.
Note: OEMs could change this value in their OS image, so it would be wise to check the value and not make any assumptions.
Note: You are naming the output with an .mp3 extension, but by default the files created will be WAV-formatted; some speech engines do support other formats/bitrates/etc., but you are passing null for the parameters.
Unless you want to deal with properly joining multiple wave files, I would recommend that you break your text into smaller parts and synthesize multiple files.
You can then play these back in sequence (using the MediaPlayer Completion event/listener), as sketched below.
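A rough Xamarin.Android sketch of that approach, reusing textToSpeech, documentsPath, ID, and ReadableText from the question. The word-boundary chunking and the file names are illustrative, and note that SynthesizeToFile is asynchronous: real code should wait for UtteranceProgressListener.OnDone for each utterance before starting playback.

int limit = Android.Speech.Tts.TextToSpeech.MaxSpeechInputLength;

var chunks = new List<string>();
string remaining = ReadableText;
while (remaining.Length > limit)
{
    // Cut at the last space before the limit so words stay intact.
    int cut = remaining.LastIndexOf(' ', limit - 1);
    if (cut <= 0) cut = limit - 1;
    chunks.Add(remaining.Substring(0, cut));
    remaining = remaining.Substring(cut).TrimStart();
}
chunks.Add(remaining);

// One output file per chunk; the data will be WAV regardless of extension.
for (int i = 0; i < chunks.Count; i++)
{
    var file = new Java.IO.File(System.IO.Path.Combine(documentsPath, ID + "_audio_" + i + ".wav"));
    textToSpeech.SynthesizeToFile(chunks[i], null, file, ID + "_" + i);
}

// Chain playback: when one file finishes, start the next.
int index = 0;
var mp = new MediaPlayer();
mp.Completion += (s, e) =>
{
    if (++index >= chunks.Count) return;
    mp.Reset();
    mp.SetDataSource(System.IO.Path.Combine(documentsPath, ID + "_audio_" + index + ".wav"));
    mp.Prepare();
    mp.Start();
};
mp.SetDataSource(System.IO.Path.Combine(documentsPath, ID + "_audio_0.wav"));
mp.Prepare();
mp.Start();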

Mix microphone with MP3 file and output that to a specific device

I'd like to be able to mix the microphone output with an MP3 file and send the result to a specific device.
I got playing an MP3 file to a specific device working:
Mp3FileReader reader = new Mp3FileReader("C:\\Users\\Victor\\Music\\Musik\\Attack.mp3");
var waveOut = new WaveOut();// or WaveOutEvent()
waveOut.DeviceNumber = deviceId; //deviceId, like 0 or 1
waveOut.Init(reader);
waveOut.Play();
So what I would like to be able to do is basically always send the microphone output to a specific output device, and overlay that output with the sound of an MP3 file when, for example, a button is pressed.
Now, is what I'm trying to do possible with NAudio, and if so, how would I go about it?
Thanks!
The basic strategy is to put the audio received from the microphone into a BufferedWaveProvider. Then turn that into an ISampleProvider with the ToSampleProvider extension method. Now you can pass that into a MixingSampleProvider and play from the MixingSampleProvider. At any time you can then mix in other audio by adding an input to the MixingSampleProvider.
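A minimal sketch of that strategy, reusing deviceId from the question. It assumes the microphone can be captured at 44.1 kHz stereo; if the mic or MP3 format differs from the mixer format, convert/resample first, since a MixingSampleProvider requires all inputs to share its sample rate and channel count.

// Capture the mic into a buffer that the mixer can read from.
var micIn = new WaveInEvent { DeviceNumber = 0, WaveFormat = new WaveFormat(44100, 2) };
var micBuffer = new BufferedWaveProvider(micIn.WaveFormat);
micIn.DataAvailable += (s, e) => micBuffer.AddSamples(e.Buffer, 0, e.BytesRecorded);

// Mixer and output device; every input must match this float format.
var mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 2))
{
    ReadFully = true // keep the output running even when inputs are silent
};
mixer.AddMixerInput(micBuffer.ToSampleProvider());

var waveOut = new WaveOutEvent { DeviceNumber = deviceId };
waveOut.Init(mixer);
micIn.StartRecording();
waveOut.Play();

// Later, e.g. in a button handler, overlay the MP3 on the same output:
mixer.AddMixerInput((ISampleProvider)new AudioFileReader("C:\\Users\\Victor\\Music\\Musik\\Attack.mp3"));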

Reducing channel count when recording in NAudio

I'm recording in NAudio with a PS3Eye camera, using CLEye drivers.
The camera has a 4 microphone array, and presents 4 channels of audio to the system.
By default, all of the channels are being recorded by NAudio. I'm recording to PCM wave, and getting a 4-channel WAV output file.
When I try to play the file in NAudio, I receive an MmException 'NoDriver' calling acmFormatSuggest. Stereo files play fine.
My sound card can only output 2 channels, which appears to cause the error. Setting my Windows audio settings to Quadraphonic does not resolve this issue.
Perhaps I can ask NAudio to record only 2 channels, or implement my own WaveStream somewhere?
Does anybody have any ideas for down-sampling the number of channels in NAudio? (preferably at record time). Big thanks!
Isn't it as simple as declaring it in your WaveFormat? Here 12000 is the sample rate and 1 is the number of channels.
waveInStream = new WaveIn(WaveCallbackInfo.FunctionCallback());
waveInStream.WaveFormat = new WaveFormat(12000, 1);
waveInStream.DeviceNumber = 2;
AudioWriter = new WaveFileWriter("c:\\MyAudio.wav", waveInStream.WaveFormat);
waveInStream.DataAvailable += waveInStream_DataAvailable;
waveInStream.StartRecording();
That's how I set up my code for a basic webcam recorder, and with stereo mics it outputs mono WAV files.
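For completeness, a sketch of the DataAvailable handler that the snippet wires up but does not show; it just forwards each captured buffer to the writer:

// Assumed handler: write whatever the device captured into the WAV file.
void waveInStream_DataAvailable(object sender, WaveInEventArgs e)
{
    AudioWriter.Write(e.Buffer, 0, e.BytesRecorded);
}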

Sound quality issues when using NAudio to play several wav files at once

My objective is this: to allow users of my .NET program to choose their own .wav files for sound effects. These effects may be played simultaneously. NAudio seemed like my best bet.
I decided to use WaveMixerStream32. One early challenge was that my users had .wav files of different formats, so to be able to mix them together with WaveMixerStream32, I needed to "normalize" them to a common format. I wasn't able to find a good example of this to follow so I suspect my problem is a result of my doing this part wrong.
My problem is that when some sounds are played, there are very noticeable "clicking" sounds at their end. I can reproduce this myself.
Also, my users have complained that sometimes, sounds aren't played at all, or are "scratchy" all the way through. I haven't been able to reproduce this in development but I have heard this for myself in our production environment.
I've played the user's wav files myself using Windows Media and VLC, so I know the files aren't corrupt. It must be a problem with how I'm using them with NAudio.
My NAudio version is v1.4.0.0.
Here's the code I used. To set up the mixer:
_mixer = new WaveMixerStream32 { AutoStop = false };
_waveOutDevice = new WaveOut(WaveCallbackInfo.NewWindow())
{
    DeviceNumber = -1,
    DesiredLatency = 300,
    NumberOfBuffers = 3,
};
_waveOutDevice.Init(_mixer);
_waveOutDevice.Play();
Surprisingly, if I set "NumberOfBuffers" to 2 here I found that sound quality was awful, with audible "ticks" occurring several times a second.
To initialize a sound file, I did this:
var sample = new AudioSample(fileName);
sample.Position = sample.Length; // To prevent the sample from playing right away
_mixer.AddInputStream(sample);
AudioSample is my class. Its constructor is responsible for the "normalization" of the wav file format. It looks like this:
private class AudioSample : WaveStream
{
    private readonly WaveChannel32 _channelStream;

    public AudioSample(string fileName)
    {
        MemoryStream memStream;
        using (var fileStream = File.OpenRead(fileName))
        {
            memStream = new MemoryStream();
            memStream.SetLength(fileStream.Length);
            fileStream.Read(memStream.GetBuffer(), 0, (int)fileStream.Length);
        }
        WaveStream originalStream = new WaveFileReader(memStream);
        var pcmStream = WaveFormatConversionStream.CreatePcmStream(originalStream);
        var blockAlignReductionStream = new BlockAlignReductionStream(pcmStream);
        var waveFormatConversionStream = new WaveFormatConversionStream(
            new WaveFormat(44100, blockAlignReductionStream.WaveFormat.BitsPerSample, 2),
            blockAlignReductionStream);
        var waveOffsetStream = new WaveOffsetStream(waveFormatConversionStream);
        _channelStream = new WaveChannel32(waveOffsetStream);
    }
Basically, the AudioSample delegates to its _channelStream object. To play an AudioSample, my code sets its "Position" to 0. The code that does this is marshalled onto the UI thread.
This almost works great. I can play multiple sounds simultaneously. Unfortunately the sound quality is bad as described above. Can anyone help me figure out why?
Some points in response to your question:
Yes, you have to have all inputs at the same sample rate before you feed them into a mixer. This is simply how digital audio works. The ACM sample rate conversion provided by WaveFormatConversionStream isn't brilliant (it has no aliasing protection). What sample rates are your input files typically at?
You are passing every input through two WaveFormatConversionStreams. Only do this if it is absolutely necessary.
I'm surprised that you are getting bad sound with NumberOfBuffers=2, which is now the default in NAudio. Have you been pausing and resuming? There was a bug where a buffer could get dropped (fixed in the latest code, and it will be fixed for NAudio 1.4 final).
A click at the end of a file can just mean it doesn't end on a zero sample. You would have to add a fade out to eliminate this (a lot of media players do this automatically)
Whenever you are troubleshooting a bad sound issue, I always recommend using WaveFileWriter to convert your WaveStream into a WAV file (taking care not to produce a never-ending file!), so you can listen to it in another media player. This lets you quickly determine whether it is your audio processing that is causing the problem, or the playback itself.
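A minimal sketch of that troubleshooting step (the DumpToWav name and the fixed duration are illustrative; the bounded loop is what prevents a never-ending file when AutoStop is false):

// Render a fixed amount of a WaveStream into a WAV file so it can be
// auditioned in another media player.
public static void DumpToWav(WaveStream source, string path, int seconds)
{
    using (var writer = new WaveFileWriter(path, source.WaveFormat))
    {
        var buffer = new byte[source.WaveFormat.AverageBytesPerSecond];
        for (int i = 0; i < seconds; i++)
        {
            int read = source.Read(buffer, 0, buffer.Length);
            if (read == 0) break; // source ended early
            writer.Write(buffer, 0, read);
        }
    }
}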

Silverlight MediaElement video issue

I can't display a video in Silverlight. I saved it in SQL Server, retrieve it from the database as a byte[], subsequently convert it to a Stream, and pass it to the MediaElement's SetSource. But nothing displays.
Please help me.
Like this:
MediaElement SoundClip = new MediaElement();
SoundClip.SetSource(stream);
SoundClip.AutoPlay = false;
SoundClip.Width = 500;
SoundClip.Height = 500;
SoundClip.Stretch = Stretch.Fill;
this.LayoutRoot.Children.Add(SoundClip);
SoundClip.Play();
But it does not work.
EDIT:
It is in WMV format.
I could not even play a WMV/WMA file from the local drive. Is there an issue with the PC I'm using? It just runs the code but does not play it, and does not show any error.
Any suggestions?
What format is the media?
Can it be played directly as a WMV, WMA, MP3, or H.264 file?
Is the media file completely downloaded?
Monitor the HTTP traffic using Fiddler to see what's happening
Attach a MediaElement.MediaFailed event handler to the SoundClip to see the error (if any); see the sketch below
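For the last point, a minimal sketch of hooking MediaFailed before calling SetSource (writing to Debug output is just one way to surface the exception):

// Surface the actual playback error instead of failing silently.
SoundClip.MediaFailed += (s, e) =>
{
    System.Diagnostics.Debug.WriteLine("MediaFailed: " + e.ErrorException.Message);
};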
