public void AudioConvert()
{
    FileStream fs = new FileStream(InputFileName, FileMode.Open, FileAccess.Read);
    NAudio.Wave.WaveFormat format = new NAudio.Wave.WaveFormat();
    NAudio.Wave.WaveStream rawStream = new RawSourceWaveStream(fs, format);
    NAudio.Wave.WaveStream wsDATA = WaveFormatConversionStream.CreatePcmStream(rawStream);
    WaveFormatConversionStreamTests wst = new WaveFormatConversionStreamTests();
    WaveStream wsstream = wst.CanConvertPcmToMp3(2, 44100);
    // ...
}
// Here is the class
public class WaveFormatConversionStreamTests
{
    public WaveStream CanConvertPcmToMp3(int channels, int sampleRate)
    {
        // CanCreateConversionStream is a helper taken from NAudio's unit tests;
        // it constructs a WaveFormatConversionStream for the given input/output formats
        WaveStream ws = CanCreateConversionStream(
            new WaveFormat(sampleRate, 16, channels),
            new Mp3WaveFormat(sampleRate, channels, 0, 128000 / 8));
        return ws;
    }
}
Here, I am trying to convert any audio format to MP3, but my code throws an "AcmNotPossible" exception at the CanConvertPcmToMp3 call. I am using the NAudio 1.6 DLL, and right now I am working on Windows 7. Please tell me where I went wrong in this code.
WaveFormatConversionStream is a wrapper around the Windows ACM APIs, so you can only use it to make MP3s if you have an ACM MP3 encoder installed, and Windows does not ship with one. The easiest way to make MP3s is simply to use LAME.exe; I explain how to do this from C# in this article.
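For reference, a rough sketch of shelling out to LAME from C# (the lame.exe path and file names here are assumptions for illustration):

using System.Diagnostics;

var psi = new ProcessStartInfo
{
    FileName = @"C:\Tools\lame.exe",        // wherever lame.exe lives on your machine
    Arguments = "-b 128 input.wav output.mp3", // 128 kbps constant bitrate
    UseShellExecute = false,
    CreateNoWindow = true
};
using (var lame = Process.Start(psi))
{
    lame.WaitForExit();
}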
Also, if you are using the alpha of NAudio 1.7 and are on Windows 8, then you might be able to use the MP3 encoder which seems to come with Windows 8 as a Media Foundation Transform. Use the MediaFoundationEncoder (the NAudio WPF demo shows how to do this).
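A rough sketch of the Media Foundation route, assuming NAudio 1.7+ on Windows 8 (the file names are placeholders):

using NAudio.Wave;
using NAudio.MediaFoundation;

MediaFoundationApi.Startup(); // initialize Media Foundation once per process
using (var reader = new WaveFileReader("input.wav"))
{
    // Uses the Windows 8 MP3 encoder MFT, if one is present
    MediaFoundationEncoder.EncodeToMp3(reader, "output.mp3", 128000);
}
MediaFoundationApi.Shutdown();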
var result = service.Synthesize(
    text: text,
    accept: "audio/wav",
    voice: "en-US_AllisonVoice"
    //voice: "en-US_HenryV3Voice"
);
using (FileStream fs = File.Create(@"C:\Users\nkk01\Desktop\voice.wav"))
{
    result.Result.WriteTo(fs);
    fs.Close();
    result.Result.Close();
}
var waveStream = new WaveFileReader(@"C:\Users\nkk01\Desktop\voice.wav");
var waveOut = new WaveOutEvent();
waveOut.Init(waveStream);
Console.WriteLine("Playing");
waveOut.Play();
Console.WriteLine("Finished playing");
Hi, this is my current code, which is extremely inefficient because it has to save the audio stream to a file in order to play it. I would like to pass the audio stream directly to my laptop's speakers using the NAudio library, but I still have not managed to find a solution. Any help would be greatly appreciated, thanks.
I'm not familiar with NAudio, but as far as I can see from their GitHub repo, the WaveFileReader constructor also accepts a Stream instead of a filename.
Simply try writing everything you have to a MemoryStream instead of a file, and don't forget to reposition the memory stream before you read it (seek to offset 0).
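A minimal sketch of that idea, assuming result.Result exposes WriteTo as in your snippet:

using (var ms = new MemoryStream())
{
    result.Result.WriteTo(ms); // write the synthesized WAV into memory
    ms.Position = 0;           // rewind before handing it to the reader
    using (var waveStream = new WaveFileReader(ms))
    using (var waveOut = new WaveOutEvent())
    {
        waveOut.Init(waveStream);
        waveOut.Play();
        // Play() returns immediately; block until playback finishes
        while (waveOut.PlaybackState == PlaybackState.Playing)
            System.Threading.Thread.Sleep(100);
    }
}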
I am using System.Speech.Synthesis.SpeechSynthesizer to convert text to speech. And due to Microsoft's anemic documentation (see my link; there are no remarks or code examples), I'm having trouble making heads or tails of the difference between two methods:
SetOutputToAudioStream and SetOutputToWaveStream.
Here's what I have deduced:
SetOutputToAudioStream takes a stream and a SpeechAudioFormatInfo instance that defines the format of the audio (samples per second, bits per sample, audio channels, etc.) and writes the spoken text to the stream.
SetOutputToWaveStream takes just a stream and writes a 16-bit, mono, 22kHz, PCM wave file to it. There is no way to pass in a SpeechAudioFormatInfo.
My problem is that SetOutputToAudioStream doesn't write a valid wave file to the stream. For example, I get an InvalidOperationException ("The wave header is corrupt") when passing the stream to System.Media.SoundPlayer. If I write the stream to disk and attempt to play it with WMP, I get a "Windows Media Player cannot play the file..." error, yet the stream written by SetOutputToWaveStream plays properly in both. My theory is that SetOutputToAudioStream is not writing a (valid) header.
Strangely, the naming convention for the SetOutputTo* methods is inconsistent: SetOutputToWaveFile takes a SpeechAudioFormatInfo while SetOutputToWaveStream does not.
I need to be able to write an 8kHz, 16-bit, mono wave file to a stream, something that neither SetOutputToAudioStream nor SetOutputToWaveStream allows me to do. Does anybody have insight into SpeechSynthesizer and these two methods?
For reference, here's some code:
Stream ret = new MemoryStream();
using (SpeechSynthesizer synth = new SpeechSynthesizer())
{
    synth.SelectVoice(voiceName);
    synth.SetOutputToWaveStream(ret);
    //synth.SetOutputToAudioStream(ret, new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono));
    synth.Speak(textToSpeak);
}
Solution:
Many thanks to @Hans Passant; here is the gist of what I'm using now:
Stream ret = new MemoryStream();
using (SpeechSynthesizer synth = new SpeechSynthesizer())
{
    // Invoke the private SetOutputStream method so we can supply a format
    // and still get a proper WAV header written to the stream
    var mi = synth.GetType().GetMethod("SetOutputStream", BindingFlags.Instance | BindingFlags.NonPublic);
    var fmt = new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono);
    mi.Invoke(synth, new object[] { ret, fmt, true, true });
    synth.SelectVoice(voiceName);
    synth.Speak(textToSpeak);
}
return ret;
For my rough testing it works great. Though using reflection is a bit icky, it's better than writing the file to disk and opening a stream.
Your code snippet is borked; you're using synth after it is disposed. But I'm sure that's not the real problem. SetOutputToAudioStream produces the raw PCM audio, the 'numbers', without a container file format (headers) like what's used in a .wav file. So yes, that cannot be played back with a regular media program.
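To see the difference, here's a minimal sketch, assuming NAudio is available, that gives that raw PCM a RIFF header after the fact (format values chosen to match the question):

var raw = new MemoryStream();
using (var synth = new SpeechSynthesizer())
{
    synth.SetOutputToAudioStream(raw,
        new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono));
    synth.Speak("hello");
}
raw.Position = 0;
// WaveFileWriter writes the RIFF header and fixes it up on dispose
using (var writer = new WaveFileWriter("wrapped.wav", new WaveFormat(8000, 16, 1)))
{
    raw.CopyTo(writer);
}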
The missing overload for SetOutputToWaveStream that takes a SpeechAudioFormatInfo is strange. It really does look like an oversight to me, even though that's extremely rare in the .NET Framework. There's no compelling reason why it shouldn't work; the underlying SAPI interface does support it. It can be hacked around with reflection to call the private SetOutputStream method. This worked fine when I tested it, but I can't vouch for it:
using System.Reflection;
...
using (Stream ret = new MemoryStream())
using (SpeechSynthesizer synth = new SpeechSynthesizer())
{
    var mi = synth.GetType().GetMethod("SetOutputStream", BindingFlags.Instance | BindingFlags.NonPublic);
    var fmt = new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Eight, AudioChannel.Mono);
    mi.Invoke(synth, new object[] { ret, fmt, true, true });
    synth.Speak("Greetings from stack overflow");
    // Testing code:
    using (var fs = new FileStream(@"c:\temp\test.wav", FileMode.Create, FileAccess.Write, FileShare.None))
    {
        ret.Position = 0;
        byte[] buffer = new byte[4096];
        for (;;)
        {
            int len = ret.Read(buffer, 0, buffer.Length);
            if (len == 0) break;
            fs.Write(buffer, 0, len);
        }
    }
}
If you're uncomfortable with the hack, then using Path.GetTempFileName() to temporarily stream it to a file will certainly work.
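That fallback is straightforward, since SetOutputToWaveFile does take a SpeechAudioFormatInfo; a rough sketch:

string tempFile = Path.GetTempFileName();
using (SpeechSynthesizer synth = new SpeechSynthesizer())
{
    synth.SetOutputToWaveFile(tempFile,
        new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono));
    synth.Speak("Greetings from stack overflow");
}
// ...open tempFile as a stream, and delete it when done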
First off, I want to say that I have read all the topics related to my problem here on Stack Overflow (and of course Googled), but that research provided no solution to my problem.
I'm writing an app for Windows Phone and I need to play two sounds simultaneously, but this code doesn't work: there is a slight but noticeable delay between the two sounds, and there must be NO perceptible delay in my project.
Stream s1 = TitleContainer.OpenStream("C.wav");
Stream s2 = TitleContainer.OpenStream("C1.wav");
SoundEffect sc = SoundEffect.FromStream(s1);
SoundEffect sc1 = SoundEffect.FromStream(s2);
SoundEffectInstance sci = sc.CreateInstance();
SoundEffectInstance sci1 = sc1.CreateInstance();
sci.Play();
sci1.Play();
I also tried to perform a simple mix of the two wav files, but it doesn't work, for a reason that I don't know: an ArgumentException ("Ensure that the specified stream contains valid PCM mono or stereo wave data.") is thrown when calling SoundEffect.FromStream(WAVEFile.Mix(s1, s2)).
public static Stream Mix(Stream in1, Stream in2)
{
    BinaryWriter bw = new BinaryWriter(new MemoryStream());
    // Copy the 44-byte RIFF header from the first file verbatim
    byte[] header = new byte[44];
    in1.Read(header, 0, 44);
    bw.Write(header);
    in2.Seek(44, SeekOrigin.Begin);
    BinaryReader r1 = new BinaryReader(in1);
    BinaryReader r2 = new BinaryReader(in2);
    // Sum the 16-bit samples pairwise; note this assumes both files have the
    // same length and format, and the sum can overflow without clamping
    while (in1.Position != in1.Length)
    {
        bw.Write((short)(r1.ReadInt16() + r2.ReadInt16()));
    }
    r1.Dispose();
    r2.Dispose();
    bw.BaseStream.Seek(0, SeekOrigin.Begin);
    return bw.BaseStream;
}
Stream s1 = TitleContainer.OpenStream("C.wav");
Stream s2 = TitleContainer.OpenStream("C1.wav");
s3 = SoundEffect.FromStream(WAVEFile.Mix(s1, s2));
So, does anyone know how to play two sounds at the same time?
So your first solution SHOULD work. I have another solution that is very similar, with a twist that I KNOW works.
static Stream stream1 = TitleContainer.OpenStream("soundeffect.wav");
static SoundEffect sfx = SoundEffect.FromStream(stream1);
static SoundEffectInstance soundEffect = sfx.CreateInstance();

public void playSound()
{
    // Required when using XNA sound outside a Game loop
    FrameworkDispatcher.Update();
    soundEffect.Play();
}
The reason your second solution didn't work is that Windows Phone can only play very specific file formats.
List of supported formats
http://msdn.microsoft.com/en-us/library/windowsphone/develop/ff462087(v=vs.105).aspx
The reference for this code is my blog:
http://www.anthonyrussell.info/postpage.php?name=60
Edit
You can see the above solution working here
http://www.windowsphone.com/en-us/store/app/xylophone/fe4e0fed-1130-e011-854c-00237de2db9e
Edit #2
In response to the comment below saying that this code doesn't work, I have also posted a working, published app on my blog that implements this very code. It's called Xylophone; it's free, and you can download the code at the bottom of the page.
http://anthonyrussell.info/postpage.php?name=60
I've converted a double array output[] to a wav file using NAudio. The file plays fine in VLC and Windows Media Player, but when I try to open it in Winamp, or read it in Matlab using wavread(), it fails (in Matlab I get the error "Invalid Wave File. Reason: Incorrect chunk size information in WAV file.", which pretty obviously means something is wrong with the header). Any ideas on how to solve this? Here's the code for converting the array to a WAV:
float[] floatOutput = output.Select(s => (float)s).ToArray();
WaveFormat waveFormat = new WaveFormat(16000, 16, 1);
WaveFileWriter writer = new WaveFileWriter("C:\\track1.wav", waveFormat);
writer.WriteSamples(floatOutput, 0, floatOutput.Length);
You must dispose your WaveFileWriter so it can properly fix up the WAV file header. A using statement is the best way to do this:
float[] floatOutput = output.Select(s => (float)s).ToArray();
WaveFormat waveFormat = new WaveFormat(16000, 16, 1);
using (WaveFileWriter writer = new WaveFileWriter("C:\\track1.wav", waveFormat))
{
    writer.WriteSamples(floatOutput, 0, floatOutput.Length);
}
I have to compress a WAV audio file with the best codec possible using NAudio.
I use WaveFormatConversionStream, but I always get this error: "AcmNotPossible calling acmStreamOpen".
I have read a lot of answers about this error, but I haven't found the solution.
Here's my code. Where am I wrong?
All help would be nice and welcome :)
private void InvokeOnNewAudioData(byte[] data, AudioFormat audioFormat)
{
    WaveFormat waveFormat = new WaveFormat(audioFormat.NumberSamplesPerSec, audioFormat.NumberBitsPerSample, audioFormat.NumberChannels);
    WaveFormat targetFormat = WaveFormat.CreateCustomFormat(WaveFormatEncoding.Vorbis1,
        22000,                      // sample rate
        audioFormat.NumberChannels, // channels
        48000,                      // average bytes per second
        2,                          // block align
        16);                        // bits per sample
    using (MemoryStream dataStream = new MemoryStream(data))
    {
        using (WaveStream inputStream = new RawSourceWaveStream(dataStream, waveFormat))
        {
            try
            {
                // Throws "AcmNotPossible calling acmStreamOpen" here
                using (WaveFormatConversionStream converter = new WaveFormatConversionStream(targetFormat, inputStream))
                {
                }
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}
This means that there is no ACM codec on your system that can perform the requested conversion. You can use the NAudioDemo app that comes with NAudio to examine all the ACM codecs installed on your system, along with their supported input and output formats. Windows certainly doesn't come with a Vorbis ACM codec, which is probably why your code doesn't work. Even if you had a Vorbis ACM codec installed, you would need to get the WaveFormat exactly right or you would still get the ACM-not-possible error.
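You can also enumerate the installed ACM drivers from code; a small sketch using NAudio's AcmDriver class:

using System;
using NAudio.Wave.Compression;

// Lists every installed ACM driver; the NAudioDemo app shows the same
// information along with each driver's supported format conversions
foreach (var driver in AcmDriver.EnumerateAcmDrivers())
{
    Console.WriteLine("{0} ({1})", driver.LongName, driver.ShortName);
}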
You'd probably be better off trying to use the NAudio support that comes with NVorbis in any case.
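For example, a minimal playback sketch, assuming the NAudio.Vorbis package that pairs NVorbis with NAudio (note that NVorbis is a decoder, so this reads .ogg files rather than encoding to Vorbis):

using (var vorbisReader = new NAudio.Vorbis.VorbisWaveReader("input.ogg"))
using (var waveOut = new NAudio.Wave.WaveOutEvent())
{
    waveOut.Init(vorbisReader);
    waveOut.Play();
    while (waveOut.PlaybackState == NAudio.Wave.PlaybackState.Playing)
        System.Threading.Thread.Sleep(100);
}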