Playing from WAV stream - C#

var result = service.Synthesize(
text: text,
accept: "audio/wav",
voice: "en-US_AllisonVoice"
//voice: "en-US_HenryV3Voice"
);
using (FileStream fs = File.Create(@"C:\Users\nkk01\Desktop\voice.wav"))
{
result.Result.WriteTo(fs);
fs.Close();
result.Result.Close();
}
var waveStream = new WaveFileReader(@"C:\Users\nkk01\Desktop\voice.wav");
var waveOut = new WaveOutEvent();
waveOut.Init(waveStream);
Console.WriteLine("Playing");
waveOut.Play();
Console.WriteLine("Finished playing");
Hi, this is my current code, which is very inefficient because it has to save the audio stream to a file before playing it. I would like to pass the audio stream directly to my laptop's speakers using the NAudio library, but I still have not managed to find a solution. Any help would be greatly appreciated, thanks.

I'm not familiar with NAudio, but as far as I can see from their GitHub repo, the constructor for WaveFileReader also accepts a stream instead of a filename.
Simply write everything you have to a memory stream instead of a file, and don't forget to reposition the memory stream before you read it (seek to offset 0), as in the sketch below.
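For example, a rough, untested sketch of that idea, keeping your Synthesize call and NAudio types, and assuming result.Result is a MemoryStream as in your snippet:

var result = service.Synthesize(
    text: text,
    accept: "audio/wav",
    voice: "en-US_AllisonVoice"
);
using (var ms = new MemoryStream())
{
    result.Result.WriteTo(ms);   // copy the WAV bytes into memory instead of a file
    ms.Position = 0;             // reposition ("seek" to offset 0) before reading
    using (var waveStream = new WaveFileReader(ms))
    using (var waveOut = new WaveOutEvent())
    {
        waveOut.Init(waveStream);
        Console.WriteLine("Playing");
        waveOut.Play();
        // Play() is non-blocking, so wait until playback has finished
        while (waveOut.PlaybackState == PlaybackState.Playing)
            Thread.Sleep(100);
        Console.WriteLine("Finished playing");
    }
}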

Can .NET audio streams be split by L/R channel? [duplicate]

I am using System.Speech.Synthesis.SpeechSynthesizer to convert text to speech. Due to Microsoft's anemic documentation (see my link; there are no remarks or code examples), I'm having trouble making heads or tails of the difference between two methods:
SetOutputToAudioStream and SetOutputToWaveStream.
Here's what I have deduced:
SetOutputToAudioStream takes a stream and a SpeechAudioFormatInfo instance that defines the format of the wave file (samples per second, bits per sample, audio channels, etc.) and writes the text to the stream.
SetOutputToWaveStream takes just a stream and writes a 16-bit, mono, 22 kHz, PCM wave file to the stream. There is no way to pass in a SpeechAudioFormatInfo.
My problem is that SetOutputToAudioStream doesn't write a valid wave file to the stream. For example, I get an InvalidOperationException ("The wave header is corrupt") when passing the stream to System.Media.SoundPlayer. If I write the stream to disk and attempt to play it with WMP I get a "Windows Media Player cannot play the file..." error, but the stream written by SetOutputToWaveStream plays properly in both. My theory is that SetOutputToAudioStream is not writing a (valid) header.
Strangely, the naming conventions for the SetOutputTo* methods are inconsistent: SetOutputToWaveFile takes a SpeechAudioFormatInfo while SetOutputToWaveStream does not.
I need to be able to write an 8 kHz, 16-bit, mono wave file to a stream, something that neither SetOutputToAudioStream nor SetOutputToWaveStream allows me to do. Does anybody have insight into SpeechSynthesizer and these two methods?
For reference, here's some code:
Stream ret = new MemoryStream();
using (SpeechSynthesizer synth = new SpeechSynthesizer())
{
synth.SelectVoice(voiceName);
synth.SetOutputToWaveStream(ret);
//synth.SetOutputToAudioStream(ret, new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono));
synth.Speak(textToSpeak);
}
Solution:
Many thanks to @Hans Passant, here is the gist of what I'm using now:
Stream ret = new MemoryStream();
using (SpeechSynthesizer synth = new SpeechSynthesizer())
{
var mi = synth.GetType().GetMethod("SetOutputStream", BindingFlags.Instance | BindingFlags.NonPublic);
var fmt = new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono);
mi.Invoke(synth, new object[] { ret, fmt, true, true });
synth.SelectVoice(voiceName);
synth.Speak(textToSpeak);
}
return ret;
In my rough testing it works great; though using reflection is a bit icky, it's better than writing the file to disk and opening a stream.
Your code snippet is borked: you're using synth after it is disposed. But I'm sure that's not the real problem. SetOutputToAudioStream produces the raw PCM audio, the 'numbers', without a container file format (headers) like what's used in a .wav file. So no, that cannot be played back with a regular media program.
The missing overload for SetOutputToWaveStream that takes a SpeechAudioFormatInfo is strange. It really does look like an oversight to me, even though that's extremely rare in the .NET framework. There's no compelling reason why it shouldn't work, the underlying SAPI interface does support it. It can be hacked around with reflection to call the private SetOutputStream method. This worked fine when I tested it but I can't vouch for it:
using System.Reflection;
...
using (Stream ret = new MemoryStream())
using (SpeechSynthesizer synth = new SpeechSynthesizer()) {
var mi = synth.GetType().GetMethod("SetOutputStream", BindingFlags.Instance | BindingFlags.NonPublic);
var fmt = new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Eight, AudioChannel.Mono);
mi.Invoke(synth, new object[] { ret, fmt, true, true });
synth.Speak("Greetings from stack overflow");
// Testing code:
using (var fs = new FileStream(@"c:\temp\test.wav", FileMode.Create, FileAccess.Write, FileShare.None)) {
ret.Position = 0;
byte[] buffer = new byte[4096];
for (;;) {
int len = ret.Read(buffer, 0, buffer.Length);
if (len == 0) break;
fs.Write(buffer, 0, len);
}
}
}
If you're uncomfortable with the hack, then using Path.GetTempFileName() to temporarily stream it to a file will certainly work.
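For example, a rough sketch of that temp-file fallback (the 8 kHz/16-bit format here is just the one from the question):

string tempPath = Path.GetTempFileName();
using (SpeechSynthesizer synth = new SpeechSynthesizer()) {
    var fmt = new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono);
    synth.SetOutputToWaveFile(tempPath, fmt);   // documented overload that accepts a format
    synth.Speak("Greetings from stack overflow");
}
// The synthesizer is disposed, so the file is complete and can be loaded back into a stream.
Stream ret = new MemoryStream(File.ReadAllBytes(tempPath));
File.Delete(tempPath);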

Append WAV Header in NAudio

I am trying to convert MP3 audio files to WAV with a standard rate (48 kHz, 16-bit, 2 channels) by opening them with "MediaFoundationReaderRT" and specifying the standard settings in it.
After the file is converted to PCM WAV, when I try to play the WAV file, it gives corrupt output:
Option 1 -
WaveStream activeStream = new MediaFoundationReaderRT("MyFile.mp3");
WaveChannel32 waveformInputStream = new WaveChannel32(activeStream);
waveformInputStream.Sample += inputStream_Sample;
I noticed that if I read the audio data into a memory stream (wherein it appends the WAV header via "WaveFileWriter"), then things work fine:
Option 2 -
WaveStream activeStream = new MediaFoundationReaderRT("MyFile.mp3");
MemoryStream memStr = new MemoryStream();
byte[] audioData = new byte[activeStream.Length];
int bytesRead = activeStream.Read(audioData, 0, audioData.Length);
memStr.Write(audioData, 0, bytesRead);
WaveFileWriter.CreateWaveFile(memStr, audioData);
RawSourceWaveStream rawSrcWavStr = new RawSourceWaveStream(activeStream,
new WaveFormat(48000, 16, 2));
WaveChannel32 waveformInputStream = new WaveChannel32(rawSrcWavStr);
waveformInputStream.Sample += inputStream_Sample;
However, reading the whole audio into memory is time-consuming, hence I am looking at "Option 1" as noted above.
I am trying to figure out what exactly the issue is. Is the missing WAV header what's causing the problem?
Is there a way in "Option 1" to append the WAV header to the "currently playing" sample data, instead of converting the whole audio data into a memory stream and then appending the header?
I'm not quite sure why you need either of those options. Converting an MP3 file to WAV is quite simple with NAudio:
using(var reader = new MediaFoundationReader("input.mp3"))
{
WaveFileWriter.CreateWaveFile("output.wav", reader);
}
And if you don't need to create a WAV file, then your job is already done - MediaFoundationReader already returns PCM from its Read method, so you can play it directly.
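For example, a minimal playback sketch (assuming NAudio's WaveOutEvent as the output device):

using (var reader = new MediaFoundationReader("input.mp3"))
using (var waveOut = new WaveOutEvent())
{
    waveOut.Init(reader);   // the reader already delivers PCM, no WAV file needed
    waveOut.Play();
    // Play() returns immediately, so wait for playback to end
    while (waveOut.PlaybackState == PlaybackState.Playing)
        Thread.Sleep(100);
}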

Can I get a GZipStream for a file without writing to intermediate temporary storage?

Can I get a GZipStream for a file on disk without writing the entire compressed content to temporary storage? I'm currently using a temporary file on disk in order to avoid possible memory exhaustion using MemoryStream on very large files (this is working fine).
public void UploadFile(string filename)
{
using (var temporaryFileStream = File.Open("tempfile.tmp", FileMode.CreateNew, FileAccess.ReadWrite))
{
using (var fileStream = File.OpenRead(filename))
using (var compressedStream = new GZipStream(temporaryFileStream, CompressionMode.Compress, true))
{
fileStream.CopyTo(compressedStream);
}
temporaryFileStream.Position = 0;
Uploader.Upload(temporaryFileStream);
}
}
What I'd like to do is eliminate the temporary storage by creating a GZipStream and having it read from the original file only as the Uploader class requests bytes from it. Is such a thing possible? How might such an implementation be structured?
Note that Upload is a static method with signature static void Upload(Stream stream).
Edit: The full code is here if it's useful. I hope I've included all the relevant context in my sample above however.
Yes, this is possible, but not easily with any of the standard .NET stream classes. When I needed to do something like this, I created a new type of stream.
It's basically a circular buffer that allows one producer (writer) and one consumer (reader). It's pretty easy to use. Let me whip up an example. In the meantime, you can adapt the example in the article.
Later: Here's an example that should come close to what you're asking for.
using (var pcStream = new ProducerConsumerStream(BufferSize))
{
// start upload in a thread
var uploadThread = new Thread(UploadThreadProc);
uploadThread.Start(pcStream);
// Open the input file and attach the gzip stream to the pcStream
using (var inputFile = File.OpenRead("inputFilename"))
{
// create gzip stream
using (var gz = new GZipStream(pcStream, CompressionMode.Compress, true))
{
var bytesRead = 0;
var buff = new byte[65536]; // 64K buffer
while ((bytesRead = inputFile.Read(buff, 0, buff.Length)) != 0)
{
gz.Write(buff, 0, bytesRead);
}
}
}
// The entire file has been compressed and copied to the buffer.
// Mark the stream as "input complete".
pcStream.CompleteAdding();
// wait for the upload thread to complete.
uploadThread.Join();
// It's very important that you don't close the pcStream before
// the uploader is done!
}
The upload thread should be pretty simple:
void UploadThreadProc(object state)
{
var pcStream = (ProducerConsumerStream)state;
Uploader.Upload(pcStream);
}
You could, of course, put the producer on a background thread and have the upload be done on the main thread (see the sketch below), or have them both on background threads. I'm not familiar with the semantics of your uploader, so I'll leave that decision to you.
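For example, a hedged sketch of the first arrangement, reusing the same ProducerConsumerStream as above, with the producer on a background task and the upload on the calling thread:

using (var pcStream = new ProducerConsumerStream(BufferSize))
{
    // Producer: compress the file into the circular buffer on a background task.
    var producer = Task.Run(() =>
    {
        using (var inputFile = File.OpenRead("inputFilename"))
        using (var gz = new GZipStream(pcStream, CompressionMode.Compress, true))
        {
            inputFile.CopyTo(gz);
        }
        pcStream.CompleteAdding();   // signal end of input to the reader
    });

    // Consumer: the uploader reads from the stream on the calling thread.
    Uploader.Upload(pcStream);
    producer.Wait();
}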

Playing sounds simultaneously on windows phone

First, I want to say that I have read all the topics related to my problem here on Stack Overflow (and of course googled), but that research provided no solution to my problem.
I'm writing an app for Windows Phone and I need to play two sounds simultaneously, but this code doesn't work because there is a slight but noticeable delay between the two sounds, and there must be NO perceptible delay in my project.
Stream s1 = TitleContainer.OpenStream("C.wav");
Stream s2 = TitleContainer.OpenStream("C1.wav");
SoundEffect sc = SoundEffect.FromStream(s1);
SoundEffect sc1 = SoundEffect.FromStream(s2);
SoundEffectInstance sci = sc.CreateInstance();
SoundEffectInstance sci1 = sc1.CreateInstance();
sci.Play();
sci1.Play();
I also tried to perform a simple mix of the two WAV files, but it doesn't work for a reason I don't know: an ArgumentException ("Ensure that the specified stream contains valid PCM mono or stereo wave data.") is thrown when calling SoundEffect.FromStream(WAVEFile.Mix(s1, s2)):
public static Stream Mix(Stream in1,Stream in2)
{
BinaryWriter bw;
bw = new BinaryWriter(new MemoryStream());
byte[] header = new byte[44];
in1.Read(header, 0, 44);
bw.Write(header);
in2.Seek(44, SeekOrigin.Begin);
BinaryReader r1 = new BinaryReader(in1);
BinaryReader r2 = new BinaryReader(in2);
while (in1.Position != in1.Length)
{
bw.Write((short)(r1.ReadInt16() + r2.ReadInt16()));
}
r1.Dispose();
r2.Dispose();
bw.BaseStream.Seek(0, SeekOrigin.Begin);
return bw.BaseStream;
}
Stream s1 = TitleContainer.OpenStream("C.wav");
Stream s2 = TitleContainer.OpenStream("C1.wav");
s3 = SoundEffect.FromStream(WAVEFile.Mix(s1, s2));
So, does anyone know how to play two sounds at the time?
So your first solution SHOULD work. I have another solution that is very similar with a twist that I KNOW works.
static Stream stream1 = TitleContainer.OpenStream("soundeffect.wav");
static SoundEffect sfx = SoundEffect.FromStream(stream1);
static SoundEffectInstance soundEffect = sfx.CreateInstance();
public void playSound(){
FrameworkDispatcher.Update();
soundEffect.Play();
}
The reason your second solution didn't work is that Windows Phone can only play very specific file formats.
List of supported formats
http://msdn.microsoft.com/en-us/library/windowsphone/develop/ff462087(v=vs.105).aspx
Reference for this code is my blog
http://www.anthonyrussell.info/postpage.php?name=60
Edit
You can see the above solution working here
http://www.windowsphone.com/en-us/store/app/xylophone/fe4e0fed-1130-e011-854c-00237de2db9e
Edit#2
In response to the comment below that this code doesn't work, I have also posted a working, published app on my blog that implements this very code. It's called Xylophone; it's free, and you can download the code at the bottom of the page here:
http://anthonyrussell.info/postpage.php?name=60

Convert MP4 to Ogg with C#

Does anyone know a simple way of converting an MP4 file to an OGG file?
I have to do it, but I don't have much knowledge about it, and all I can find are programs, not examples or libraries.
Thanks in advance.
I would recommend dispatching this to FFmpeg (http://www.ffmpeg.org/) using a Process and command-line arguments. You can redirect I/O if you need to (e.g. for logging). Just do something like process.WaitForExit() after you've started it. You could do this on a background thread (BackgroundWorker, ThreadPool, etc.) if you need to avoid blocking the UI.
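For example, a rough sketch of dispatching to FFmpeg as a process; the executable path and the "-acodec libvorbis" arguments are assumptions you'll likely need to adjust:

var psi = new ProcessStartInfo
{
    FileName = "ffmpeg.exe",   // assumes ffmpeg is on the PATH
    Arguments = "-i \"input.mp4\" -vn -acodec libvorbis \"output.ogg\"",
    UseShellExecute = false,
    RedirectStandardError = true,   // FFmpeg writes its log to stderr
    CreateNoWindow = true
};
using (var process = Process.Start(psi))
{
    string log = process.StandardError.ReadToEnd();
    process.WaitForExit();
    if (process.ExitCode != 0)
        throw new InvalidOperationException("FFmpeg failed: " + log);
}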
Extending Chad's answer, I used NReco.VideoConverter, which is a helpful wrapper for FFmpeg. My code to convert an MP4 to OGG is as follows.
1. Save the file to a temporary file in local storage:
var path = Path.GetTempPath() + name;
using (var file = File.Create(path))
{
stream.Seek(0, SeekOrigin.Begin);
await stream.CopyToAsync(file);
file.Close();
return path;
}
2. Now use the video converter to convert the file. Simple!
var output = new MemoryStream();
var ffMpeg = new FFMpegConverter();
ffMpeg.ConvertMedia(filePath, output, Format.ogg);
output.Seek(0, SeekOrigin.Begin);
return output;
static void Mp4ToOgg(string fileName)
{
DsReader dr = new DsReader(fileName);
if (dr.HasAudio)
{
string waveFile = fileName + ".wav";
WaveWriter ww = new WaveWriter(File.Create(waveFile),
AudioCompressionManager.FormatBytes(dr.ReadFormat()));
ww.WriteData(dr.ReadData());
ww.Close();
dr.Close();
try
{
Sox.Convert(@"sox.exe", waveFile, waveFile + ".ogg", SoxAudioFileType.Ogg);
}
catch (SoxException ex)
{
throw;
}
}
}
From: How to convert an MP4 file to an OGG file
