Compress an Audio WAV with NAudio - Error AcmNotPossible calling acmStreamOpen - c#

I need to compress WAV audio with the best codec possible using NAudio.
I use WaveFormatConversionStream, but I always get this error: "AcmNotPossible calling acmStreamOpen".
I have read a lot of answers about this error, but I haven't found a solution.
Here's my code. Where am I going wrong?
Any help would be welcome :)
private void InvokeOnNewAudioData(byte[] data, AudioFormat audioFormat)
{
    WaveFormat waveFormat = new WaveFormat(audioFormat.NumberSamplesPerSec, audioFormat.NumberBitsPerSample, audioFormat.NumberChannels);
    WaveFormat targetFormat = WaveFormat.CreateCustomFormat(WaveFormatEncoding.Vorbis1,
        22000,                      // sample rate
        audioFormat.NumberChannels, // channels
        48000,                      // average bytes per second
        2,                          // block align
        16);                        // bits per sample

    using (MemoryStream dataStream = new MemoryStream(data))
    using (WaveStream inputStream = new RawSourceWaveStream(dataStream, waveFormat))
    {
        // Throws "AcmNotPossible calling acmStreamOpen" here:
        using (WaveFormatConversionStream converter = new WaveFormatConversionStream(targetFormat, inputStream))
        {
        }
    }
}

This means that there is no ACM codec on your system that can perform the requested conversion. You can use the NAudioDemo app that comes with NAudio to examine all the ACM codecs you have installed on your system and their supported input and output formats. Windows certainly doesn't come with a Vorbis ACM codec, which is probably why your code doesn't work. Even if you had installed a Vorbis ACM codec, you need to get the WaveFormat exactly right or you will get the ACM not possible error.
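As a sketch of how to check this in code rather than via the demo app (assuming the `NAudio.Wave.Compression` namespace from NAudio 1.x), you can enumerate the ACM drivers installed on the machine:

```csharp
using System;
using NAudio.Wave.Compression;

// List every ACM driver the system exposes. If no Vorbis encoder
// appears in this list, WaveFormatConversionStream cannot create one.
foreach (AcmDriver driver in AcmDriver.EnumerateAcmDrivers())
{
    Console.WriteLine("{0} ({1})", driver.ShortName, driver.LongName);
}
```

On a stock Windows install you will typically see PCM converters, GSM 6.10, ADPCM and similar, but no Vorbis entry.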
You'd probably be better off trying to use the NAudio support that comes with NVorbis in any case.

Related

Playing from wav stream

var result = service.Synthesize(
    text: text,
    accept: "audio/wav",
    voice: "en-US_AllisonVoice"
    //voice: "en-US_HenryV3Voice"
);

using (FileStream fs = File.Create(@"C:\Users\nkk01\Desktop\voice.wav"))
{
    result.Result.WriteTo(fs);
    fs.Close();
    result.Result.Close();
}

var waveStream = new WaveFileReader(@"C:\Users\nkk01\Desktop\voice.wav");
var waveOut = new WaveOutEvent();
waveOut.Init(waveStream);
Console.WriteLine("Playing");
waveOut.Play();
Console.WriteLine("Finished playing");
Hi, this is my current code, which is extremely inefficient because it has to save the audio stream to a file before playing it. I would like to pass the audio stream directly to my laptop's speakers using the NAudio library. I still haven't managed to find a solution. Any help would be greatly appreciated, thanks.
I'm not familiar with NAudio, but as far as I can see from their GitHub repo, the constructor for WaveFileReader also accepts a stream instead of a filename.
Simply try writing everything you have to a memory stream instead of a file. Don't forget to reposition the memory stream before you read it (seek to offset 0).
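A minimal sketch of that suggestion, assuming `result.Result` is the same stream-like object with a `WriteTo` method as in the question:

```csharp
using (var ms = new MemoryStream())
{
    result.Result.WriteTo(ms);   // write the WAV bytes into memory instead of a file
    ms.Position = 0;             // rewind before reading

    using (var waveStream = new WaveFileReader(ms))
    using (var waveOut = new WaveOutEvent())
    {
        waveOut.Init(waveStream);
        waveOut.Play();
        // Play() returns immediately; block until playback actually finishes
        while (waveOut.PlaybackState == PlaybackState.Playing)
        {
            System.Threading.Thread.Sleep(100);
        }
    }
}
```

Note the wait loop: in the original code `waveOut.Play()` is asynchronous, so "Finished playing" prints before any audio is heard.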

Can .NET audio streams be split by L/R channel? [duplicate]

I am using System.Speech.Synthesis.SpeechSynthesizer to convert text to speech. And due to Microsoft's anemic documentation (see my link, there's no remarks or code examples) I'm having trouble making heads or tails of the difference between two methods:
SetOutputToAudioStream and SetOutputToWaveStream.
Here's what I have deduced:
SetOutputToAudioStream takes a stream and a SpeechAudioFormatInfo instance that defines the format of the wave file (samples per second, bits per second, audio channels, etc.) and writes the text to the stream.
SetOutputToWaveStream takes just a stream and writes a 16 bit, mono, 22kHz, PCM wave file to the stream. There is no way to pass in SpeechAudioFormatInfo.
My problem is that SetOutputToAudioStream doesn't write a valid wave file to the stream. For example, I get an InvalidOperationException ("The wave header is corrupt") when passing the stream to System.Media.SoundPlayer. If I write the stream to disk and attempt to play it with WMP I get a "Windows Media Player cannot play the file..." error, but the stream written by SetOutputToWaveStream plays properly in both. My theory is that SetOutputToAudioStream is not writing a (valid) header.
Strangely, the naming conventions for the SetOutputTo*Blah* methods are inconsistent. SetOutputToWaveFile takes a SpeechAudioFormatInfo while SetOutputToWaveStream does not.
I need to be able to write an 8kHz, 16-bit, mono wave file to a stream, something that neither SetOutputToAudioStream nor SetOutputToWaveStream allows me to do. Does anybody have insight into SpeechSynthesizer and these two methods?
For reference, here's some code:
Stream ret = new MemoryStream();
using (SpeechSynthesizer synth = new SpeechSynthesizer())
{
    synth.SelectVoice(voiceName);
    synth.SetOutputToWaveStream(ret);
    //synth.SetOutputToAudioStream(ret, new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono));
    synth.Speak(textToSpeak);
}
Solution:
Many thanks to @Hans Passant, here is the gist of what I'm using now:
Stream ret = new MemoryStream();
using (SpeechSynthesizer synth = new SpeechSynthesizer())
{
    var mi = synth.GetType().GetMethod("SetOutputStream", BindingFlags.Instance | BindingFlags.NonPublic);
    var fmt = new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono);
    mi.Invoke(synth, new object[] { ret, fmt, true, true });
    synth.SelectVoice(voiceName);
    synth.Speak(textToSpeak);
}
return ret;
In my rough testing it works great; though using reflection is a bit icky, it's better than writing the file to disk and opening a stream.
Your code snippet is borked, you're using synth after it is disposed. But that's not the real problem I'm sure. SetOutputToAudioStream produces the raw PCM audio, the 'numbers'. Without a container file format (headers) like what's used in a .wav file. Yes, that cannot be played back with a regular media program.
The missing overload for SetOutputToWaveStream that takes a SpeechAudioFormatInfo is strange. It really does look like an oversight to me, even though that's extremely rare in the .NET framework. There's no compelling reason why it shouldn't work, the underlying SAPI interface does support it. It can be hacked around with reflection to call the private SetOutputStream method. This worked fine when I tested it but I can't vouch for it:
using System.Reflection;
...
using (Stream ret = new MemoryStream())
using (SpeechSynthesizer synth = new SpeechSynthesizer())
{
    var mi = synth.GetType().GetMethod("SetOutputStream", BindingFlags.Instance | BindingFlags.NonPublic);
    var fmt = new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Eight, AudioChannel.Mono);
    mi.Invoke(synth, new object[] { ret, fmt, true, true });
    synth.Speak("Greetings from stack overflow");
    // Testing code:
    using (var fs = new FileStream(@"c:\temp\test.wav", FileMode.Create, FileAccess.Write, FileShare.None))
    {
        ret.Position = 0;
        byte[] buffer = new byte[4096];
        for (;;)
        {
            int len = ret.Read(buffer, 0, buffer.Length);
            if (len == 0) break;
            fs.Write(buffer, 0, len);
        }
    }
}
If you're uncomfortable with the hack then using Path.GetTempFileName() to temporarily stream it to a file will certainly work.
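A sketch of that temp-file fallback (the synthesized text and cleanup strategy here are illustrative; SetOutputToWaveFile, unlike SetOutputToWaveStream, does accept a SpeechAudioFormatInfo):

```csharp
using System.IO;
using System.Speech.AudioFormat;
using System.Speech.Synthesis;

string tempPath = Path.GetTempFileName();
try
{
    using (var synth = new SpeechSynthesizer())
    {
        // The file-based overload lets you specify the format directly
        synth.SetOutputToWaveFile(tempPath,
            new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono));
        synth.Speak("Greetings from stack overflow");
    }
    byte[] wavBytes = File.ReadAllBytes(tempPath);
    // ... wrap wavBytes in a MemoryStream or use them as needed
}
finally
{
    File.Delete(tempPath);
}
```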

Playing a wav file after processing it with NAudio

I've converted a double array output[] to a WAV file using NAudio. The file plays fine in VLC and Windows Media Player, but when I try to open it in Winamp, or access it in MATLAB using wavread(), I fail (in MATLAB I get the error: "Invalid Wave File. Reason: Incorrect chunk size information in WAV file.", which pretty obviously means something's wrong with the header). Any ideas on how to solve this? Here's the code for converting the array to a WAV:
float[] floatOutput = output.Select(s => (float)s).ToArray();
WaveFormat waveFormat = new WaveFormat(16000, 16, 1);
WaveFileWriter writer = new WaveFileWriter("C:\\track1.wav", waveFormat);
writer.WriteSamples(floatOutput, 0, floatOutput.Length);
You must dispose your WaveFileWriter so it can properly fix up the WAV file header. A using statement is the best way to do this:
float[] floatOutput = output.Select(s => (float)s).ToArray();
WaveFormat waveFormat = new WaveFormat(16000, 16, 1);
using (WaveFileWriter writer = new WaveFileWriter("C:\\track1.wav", waveFormat))
{
    writer.WriteSamples(floatOutput, 0, floatOutput.Length);
}

How to convert any audio format to mp3 using NAudio

public void AudioConvert()
{
    FileStream fs = new FileStream(InputFileName, FileMode.Open, FileAccess.Read);
    NAudio.Wave.WaveFormat format = new NAudio.Wave.WaveFormat();
    NAudio.Wave.WaveStream rawStream = new RawSourceWaveStream(fs, format);
    NAudio.Wave.WaveStream wsDATA = WaveFormatConversionStream.CreatePcmStream(rawStream);
    WaveStream wsstream = wst.CanConvertPcmToMp3(2, 44100);
    .....
}

// Here is the class
public class WaveFormatConversionStreamTests
{
    public WaveStream CanConvertPcmToMp3(int channels, int sampleRate)
    {
        WaveStream ws = CanCreateConversionStream(
            new WaveFormat(sampleRate, 16, channels),
            new Mp3WaveFormat(sampleRate, channels, 0, 128000 / 8));
        return ws;
    }
}
Here, I am trying to convert any audio format to MP3, but my code throws an "AcmNotPossible" exception at the CanConvertPcmToMp3 function call. I am using the NAudio 1.6 DLL, and I am working on Windows 7. Please tell me where I went wrong in this code.
WaveFormatConversionStream is a wrapper around the Windows ACM APIs, so you can only use it to make MP3s if you have an ACM MP3 encoder installed. Windows does not ship with one of these. The easiest way to make MP3s is simply to use LAME.exe. I explain how to do this in C# in this article.
Also, if you are using the alpha of NAudio 1.7 and are on Windows 8 then you might be able to use the MP3 encoder which seems to come with Windows 8 as a Media Foundation Transform. Use the MediaFoundationEncoder (the NAudio WPF demo shows how to do this).
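A sketch of the LAME.exe approach, assuming `lame.exe` is on the PATH (the bitrate flag and file names here are illustrative):

```csharp
using System.Diagnostics;

// Shell out to LAME to encode a PCM WAV file as a 128 kbps MP3.
var psi = new ProcessStartInfo
{
    FileName = "lame.exe",
    Arguments = "-b 128 \"input.wav\" \"output.mp3\"",
    UseShellExecute = false,
    CreateNoWindow = true
};
using (var process = Process.Start(psi))
{
    process.WaitForExit();
}
```

This sidesteps ACM entirely, so it works regardless of which codecs Windows has installed.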

change wav file ( to 16KHz and 8bit ) with using NAudio

I want to change a WAV file to 8KHz and 8bit using NAudio.
WaveFormat format1 = new WaveFormat(8000, 8, 1);
byte[] waveByte = HelperClass.ReadFully(File.OpenRead(wavFile));
using (WaveFileWriter writer = new WaveFileWriter(outputFile, format1))
{
    writer.WriteData(waveByte, 0, waveByte.Length);
}
but when I play the output file, all I hear is sizzle. Is my code correct, or what is wrong?
If I set the WaveFormat to WaveFormat(44100, 16, 1), it works fine.
Thanks.
A few pointers:
You need to use a WaveFormatConversionStream to actually convert from one sample rate / bit depth to another - you are just putting the original audio into the new file with the wrong wave format.
You may also need to convert in two steps - first changing the sample rate, then changing the bit depth / channel count. This is because the underlying ACM codecs can't always do the conversion you want in a single step.
You should use WaveFileReader to read your input file - you only want the actual audio data part of the file to get converted, but you are currently copying everything including the RIFF chunks as though they were audio data into the new file.
8 bit PCM audio usually sounds horrible. Use 16 bit, or if you must have 8 bit, use G.711 u-law or a-law
Downsampling audio can result in aliasing. To do it well you need to implement a low-pass filter first. This unfortunately isn't easy, but there are sites that help you generate the coefficients for a Chebyshev low pass filter for the specific downsampling you are doing.
Here's some example code showing how to convert from one format to another. Remember that you might need to do the conversion in multiple steps depending on the format of your input file:
using (var reader = new WaveFileReader("input.wav"))
{
    var newFormat = new WaveFormat(8000, 16, 1);
    using (var conversionStream = new WaveFormatConversionStream(newFormat, reader))
    {
        WaveFileWriter.CreateWaveFile("output.wav", conversionStream);
    }
}
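If the ACM codec refuses to change sample rate and bit depth in one go, the two-step conversion mentioned above can be sketched like this (a hedged example; the intermediate format is chosen for illustration):

```csharp
using (var reader = new WaveFileReader("input.wav"))
{
    // Step 1: change the sample rate, keeping 16-bit samples
    var intermediateFormat = new WaveFormat(8000, 16, reader.WaveFormat.Channels);
    using (var resampled = new WaveFormatConversionStream(intermediateFormat, reader))
    {
        // Step 2: change the bit depth and channel count
        var targetFormat = new WaveFormat(8000, 8, 1);
        using (var converted = new WaveFormatConversionStream(targetFormat, resampled))
        {
            WaveFileWriter.CreateWaveFile("output.wav", converted);
        }
    }
}
```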
The following code solved my problem dealing with G.711 Mu-Law with a vox file extension to wav file. I kept getting a "No RIFF Header" error with WaveFileReader otherwise.
FileStream fileStream = new FileStream(fileName, FileMode.Open);
var waveFormat = WaveFormat.CreateMuLawFormat(8000, 1);
var reader = new RawSourceWaveStream(fileStream, waveFormat);
using (WaveStream convertedStream = WaveFormatConversionStream.CreatePcmStream(reader))
{
    WaveFileWriter.CreateWaveFile(fileName.Replace("vox", "wav"), convertedStream);
}
fileStream.Close();
OpenFileDialog openFileDialog = new OpenFileDialog();
openFileDialog.Filter = "Wave Files (*.wav)|*.wav|All Files (*.*)|*.*";
openFileDialog.FilterIndex = 1;

WaveFileReader reader = new NAudio.Wave.WaveFileReader(dpmFileDestPath);
WaveFormat newFormat = new WaveFormat(8000, 16, 1);
WaveFormatConversionStream str = new WaveFormatConversionStream(newFormat, reader);
try
{
    WaveFileWriter.CreateWaveFile("C:\\Konvertierten_Dateien.wav", str);
}
catch (Exception ex)
{
    MessageBox.Show(String.Format("{0}", ex.Message));
}
finally
{
    str.Close();
}
MessageBox.Show("Konvertieren ist Fertig!");
