I'm experimenting with playing MP3 files using NAudio. My simple app has one Windows Form with a single button to play/pause the music. However, the app has two major problems:
The intention is that if the music is playing and the play button is pressed, playback should pause. Instead, when the button is pressed again, the app restarts the music and then (sometimes) throws an exception.
If the button is pressed two or three times in quick succession, the app throws a NAudio.MmException (Message=InvalidParameter calling acmStreamClose).
Can someone tell me what's wrong? Below is my code:
using System;
using System.Windows.Forms;
namespace NaudioTesting
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private NAudio.Wave.BlockAlignReductionStream stream = null;
private NAudio.Wave.DirectSoundOut output = null;
public void LoadFile(string filePath)
{
DisposeWave();
if (filePath.EndsWith(".mp3"))
{
NAudio.Wave.WaveStream pcm =
NAudio.Wave.WaveFormatConversionStream.CreatePcmStream(new NAudio.Wave.Mp3FileReader(filePath));
stream = new NAudio.Wave.BlockAlignReductionStream(pcm);
}
else if (filePath.EndsWith(".wav"))
{
NAudio.Wave.WaveStream pcm = new NAudio.Wave.WaveChannel32(new NAudio.Wave.WaveFileReader(filePath));
stream = new NAudio.Wave.BlockAlignReductionStream(pcm);
}
else throw new InvalidOperationException("Not a correct audio file type.");
output = new NAudio.Wave.DirectSoundOut();
output.Init(stream);
output.Play();
}
private void playPauseButton_Click(object sender, EventArgs e)
{
string filePath = "GetLoud.mp3";
LoadFile(filePath);
if (output != null)
{
if (output.PlaybackState == NAudio.Wave.PlaybackState.Playing) output.Pause();
else if (output.PlaybackState == NAudio.Wave.PlaybackState.Paused) output.Play();
}
}
private void DisposeWave()
{
try
{
if (output != null)
{
if (output.PlaybackState == NAudio.Wave.PlaybackState.Playing) output.Stop();
output.Dispose();
output = null;
}
if (stream != null)
{
stream.Dispose();
stream = null;
}
}
catch (NAudio.MmException)
{
throw;
}
}
private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
DisposeWave();
}
}
}
Looking at the DirectSoundOut source, its Play and Pause implementation doesn't support resuming, so what happens to you is exactly what it is designed to do: calling Play always starts from the beginning of the stream.
You should use WaveOut instead. It supports resuming by calling Play again, which is exactly what your code expects.
output = new NAudio.Wave.WaveOut();
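For example, a minimal sketch of the click handler with that change (assuming the output field is declared as NAudio.Wave.WaveOut or IWavePlayer, and the file is only loaded on the first click) could look like this:
private void playPauseButton_Click(object sender, EventArgs e)
{
    if (output == null)
    {
        LoadFile("GetLoud.mp3"); // creates the WaveOut and starts playback once
        return;
    }
    if (output.PlaybackState == NAudio.Wave.PlaybackState.Playing) output.Pause();
    else if (output.PlaybackState == NAudio.Wave.PlaybackState.Paused) output.Play(); // resumes from the paused position
}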
I am trying to record the speaker output to a wave file using NAudio's WasapiLoopbackCapture by writing out the bytes as they become available. I expected WasapiLoopbackCapture.DataAvailable to report BytesRecorded as 0 when there is no sound; however, in my case I am getting a non-zero byte count in BytesRecorded even though the speakers are silent. Could you please let me know what's wrong here?
class CallResponse
{
private WaveFileWriter _writer;
private WasapiLoopbackCapture _waveIn;
private string _inFile;
private string _inFileCompressed;
private int _duration;
public bool _isRecording;
public bool _speechDetected;
public CallResponse()
{
_inFile = #"C:\Naresh\test.wav";
_inFileCompressed = #"C:\Naresh\test16Hz.wav";
_waveIn = new WasapiLoopbackCapture();
_waveIn.DataAvailable += (s, e) =>
{
Console.WriteLine(e.BytesRecorded);
_writer.Write(e.Buffer, 0, e.BytesRecorded);
if (_writer.Position > _waveIn.WaveFormat.AverageBytesPerSecond * _duration)
{
Console.Write("\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\bRecording stopped...");
_waveIn.StopRecording();
}
};
_waveIn.RecordingStopped += (s, e) =>
{
if (_writer != null)
{
_writer.Close();
_writer.Dispose();
_writer = null;
}
Console.Write("\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\bCompressing Audio...");
using (var reader = new AudioFileReader(_inFile))
{
var resampler = new WdlResamplingSampleProvider(reader, 16000);
WaveFileWriter.CreateWaveFile16(_inFileCompressed, resampler);
}
_isRecording = false;
};
}
public void DisposeObjects()
{
if (_waveIn != null)
{
_waveIn.Dispose();
_waveIn = null;
}
}
public void StartRecording(int duration = 5)
{
_writer = new WaveFileWriter(_inFile, _waveIn.WaveFormat);
this._duration = duration;
_speechDetected = false;
_isRecording = true;
Console.WriteLine("\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\bRecording....");
_waveIn.StartRecording();
}
}
If something is playing audio, then WasapiLoopbackCapture will capture that audio, even if it contains silence. So there is nothing particularly wrong or surprising about getting non-zero BytesRecorded values. In fact, if no applications are sending audio to the device being captured, then what typically happens is that you won't get any DataAvailable callbacks at all.
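If you want to skip writing buffers that contain only digital silence, a minimal sketch of such a check (assuming the default IEEE-float loopback format, i.e. 4 bytes per sample; the threshold value is illustrative) could look like this:
// Returns true if every float sample in the buffer is (near) zero.
private static bool IsSilent(byte[] buffer, int bytesRecorded, float threshold = 0.0001f)
{
    for (int i = 0; i + 4 <= bytesRecorded; i += 4)
    {
        if (Math.Abs(BitConverter.ToSingle(buffer, i)) > threshold)
            return false;
    }
    return true;
}
// e.g. inside DataAvailable: if (!IsSilent(e.Buffer, e.BytesRecorded)) _writer.Write(e.Buffer, 0, e.BytesRecorded);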
I'm working on a text-to-speech demo in which I'm using SpeechSynthesizer.
My problem is that when I click the play button, the page keeps loading continuously.
It does not stop even after the speech has finished. Pause and resume are also not working in my demo.
I also tried using the SpVoice interface for text to speech, but pause and resume don't work in that demo either.
Demo Using Speech synthesizer -
SpeechSynthesizer spRead;
protected void Page_Load(object sender, EventArgs e)
{
if (!IsPostBack)
{
// Creating new object of SpeechSynthesizer.
spRead = new SpeechSynthesizer();
}
} // Page_Load
protected void btnPlay_Click(object sender, EventArgs e)
{
// Get the content data as per the content id
_contentData = new ContentFormData(ContentManager.GetContentData(Page.Database, ContentId, Page.IsLive));
// Get the text after trim
_speechText = WebUtility.HtmlDecode(_contentData.Content.Text1.Trim());
// If Speech Text is not null
// then check the button class; if cssclass is play, change it to pause
// and call the speak method.
if (_speechText != null && !string.IsNullOrEmpty(_speechText))
{
// if button is play buttton
// then call play method and speech the text
if (btnPlay.CssClass == "button-play")
{
btnPlay.CssClass = btnPlay.CssClass.Replace("button-play",
"button-pause");
// creating the object of SpeechSynthesizer class
spRead = new SpeechSynthesizer();
spRead.SpeakAsync(_speechText);
spRead.SpeakCompleted += new
EventHandler<SpeakCompletedEventArgs>(spRead_SpeakCompleted);
}
// If button class is pause
// then change it to continue and call pause method.
else if (btnPlay.CssClass == "button-pause")
{
btnPlay.CssClass = btnPlay.CssClass.Replace("button-pause",
"button-continue");
if (spRead != null)
{
// Check the state of spRead, and call pause method.
if (spRead.State == SynthesizerState.Speaking)
{
spRead.Pause();
}
}
btnPlayFromStart.Enabled = true;
}
// If button class is continue
// then change it to pause and call resume method.
else if (btnPlay.CssClass == "button-continue")
{
btnPlay.CssClass = btnPlay.CssClass.Replace("button-continue",
"button-pause");
if (spRead != null)
{
// Check the state of spRead, and call resume method.
if (spRead.State == SynthesizerState.Paused)
{
spRead.Resume();
}
}
btnPlayFromStart.Enabled = false;
}
}
}
private void spRead_SpeakCompleted(object sender, SpeakCompletedEventArgs e)
{
// If Spread is not null
// then dispose the spread after the speak is completed
// else do nothing
if (spRead != null)
{
spRead.Dispose();
}
else
{
// do nothing
}
} // spRead_SpeakCompleted
Demo Using SpVoice -
SpVoice voice;
protected void Page_Load(object sender, EventArgs e)
{
_contentData = new
ContentFormData(ContentManager.GetContentData(Page.Database, ContentId,
Page.IsLive));
_speechText = WebUtility.HtmlDecode(_contentData.Content.Text1.Trim());
} // Page_Load
protected void btnPlay_Click(object sender, EventArgs e)
{
voice = new SpVoice();
if (btnPlay.CssClass == "button-play")
{
voice.Speak(_speechText, SpeechVoiceSpeakFlags.SVSFlagsAsync);
btnPlay.CssClass = btnPlay.CssClass.Replace("button-play", "button-
pause");
}
else if (btnPlay.CssClass == "button-pause")
{
voice.Pause();
btnPlay.CssClass = btnPlay.CssClass.Replace("button-pause", "button-
continue");
}
else if (btnPlay.CssClass == "button-continue")
{
voice.Resume();
btnPlay.CssClass = btnPlay.CssClass.Replace("button-continue",
"button-play");
}
}
Solved the issue by using a handler to stop the postback.
I stored the voice object in Session, and in pause and resume I get the voice object back from Session.
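A rough sketch of that session approach (the session key and helper name are illustrative, not the exact code used):
// Create the synthesizer once and keep it in Session so the same instance
// (and its playback state) survives the postbacks from Play/Pause/Continue.
private SpeechSynthesizer GetSynthesizer()
{
    var spRead = Session["spRead"] as SpeechSynthesizer;
    if (spRead == null)
    {
        spRead = new SpeechSynthesizer();
        Session["spRead"] = spRead;
    }
    return spRead;
}
// In the pause/continue branches, act on the stored instance, e.g.:
// var spRead = GetSynthesizer();
// if (spRead.State == SynthesizerState.Speaking) spRead.Pause();
// else if (spRead.State == SynthesizerState.Paused) spRead.Resume();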
I've been at this for a while but have made no progress, so this is my last resort!
I am trying to send the system audio (the audio I hear in my headphones) to Skype, so that the people in my call basically hear what I hear. I thought I would do this using the Skype4COM library and NAudio.
What I've done is create a class that uses WasapiLoopbackCapture and WaveFileWriter to write temporary data to a .wav file, and redirect the audio using the SkypeSystemAudio.set_InputDevice method. But when I'm talking to somebody and I try to start recording, the other person doesn't hear me anymore; I go completely quiet and no sound is played to them.
I thought it would be best if I posted the whole class since it's easier to understand everything.
public class SkypeSystemAudio
{
public NAudio.Wave.WasapiLoopbackCapture capture;
NAudio.CoreAudioApi.MMDevice device;
NAudio.Wave.WaveFileWriter writer;
private Call CurrentCall = null;
private Skype SkypeApplet;
private const int SkypeProtocol = 9;
private bool IsRecording = false;
public string tempfilepath = System.IO.Directory.GetCurrentDirectory() + @"\temp.wav";
#region Public
public void Initialize()
{
device = NAudio.Wave.WasapiLoopbackCapture.GetDefaultLoopbackCaptureDevice();
Init();
}
public void Initialize(NAudio.CoreAudioApi.MMDevice device)
{
this.device = device;
Init();
}
public void StartRecording()
{
capture.StartRecording();
if (CurrentCall != null)
{
CurrentCall.set_OutputDevice(TCallIoDeviceType.callIoDeviceTypeFile, tempfilepath);
IsRecording = true;
}
}
public void StopRecording()
{
capture.StopRecording();
if (CurrentCall != null)
{
CurrentCall.set_OutputDevice(TCallIoDeviceType.callIoDeviceTypeFile, "");
}
}
#endregion
private void Init()
{
capture = new WasapiLoopbackCapture(device);
capture.ShareMode = NAudio.CoreAudioApi.AudioClientShareMode.Shared;
capture.DataAvailable += capture_DataAvailable;
capture.RecordingStopped += capture_RecordingStopped;
WaveFormat format = new WaveFormat(16000, 1); // skype wants 16 Bit samples, 16khz, mono WAV file
//tried using the standard waveformat in the device object too. Didn't work though.
writer = new WaveFileWriter(tempfilepath, format );
SkypeApplet = new Skype();
SkypeApplet.Attach(SkypeProtocol, true);
SkypeApplet.CallStatus += SkypeApplet_CallStatus;
}
void SkypeApplet_CallStatus(Call pCall, TCallStatus Status)
{
if (Status == TCallStatus.clsRinging)
{
CurrentCall = pCall;
pCall.Answer();
}
}
void capture_DataAvailable(object sender, WaveInEventArgs e)
{
if (writer != null)
writer.Write(e.Buffer, 0, e.BytesRecorded);
}
void capture_RecordingStopped(object sender, StoppedEventArgs e)
{
IsRecording = false;
}
}
Does anyone know why this isn't working? I have no clue what to try next.
Any help will be greatly appreciated!
I have actually done something similar, but not using Skype4COM.
What I did was use "virtual cables", just like Sebastian L suggested. This way you can control what's going in and out of Skype; the downside is that you need to install the virtual cables and configure Skype to use them.
The cables appear among the audio devices as standard input/output devices.
I have used these cables: VAC and VB-Cable.
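To illustrate the idea, here is a rough, untested sketch of routing the loopback capture into a virtual cable with NAudio (assuming the cable shows up as an ordinary render device; its friendly name depends on which cable you install). Skype would then be configured to use the cable's output as its microphone. It assumes NAudio.Wave, NAudio.CoreAudioApi and System.Linq are in scope.
var enumerator = new MMDeviceEnumerator();
var cable = enumerator.EnumerateAudioEndPoints(DataFlow.Render, DeviceState.Active)
                      .First(d => d.FriendlyName.Contains("CABLE")); // name depends on the installed cable

var capture = new WasapiLoopbackCapture();                 // what you hear
var buffer = new BufferedWaveProvider(capture.WaveFormat); // holds the captured samples
capture.DataAvailable += (s, e) => buffer.AddSamples(e.Buffer, 0, e.BytesRecorded);

var cableOut = new WasapiOut(cable, AudioClientShareMode.Shared, false, 100);
cableOut.Init(buffer);                                     // play the captured audio into the cable
capture.StartRecording();
cableOut.Play();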
Hope it helps.
I want to make an app that turns on the flash when I press the On button and turns it off when I press the Off button. This is my code:
protected AudioVideoCaptureDevice Device { get; set; }
private async void Button_Click_TurnOn(object sender, RoutedEventArgs e)
{
var sensorLocation = CameraSensorLocation.Back;
try
{
// get the AudioVideoCaptureDevice
var avDevice = await AudioVideoCaptureDevice.OpenAsync(sensorLocation,
AudioVideoCaptureDevice.GetAvailableCaptureResolutions(sensorLocation).First());
// turn flashlight on
var supportedCameraModes = AudioVideoCaptureDevice
.GetSupportedPropertyValues(sensorLocation, KnownCameraAudioVideoProperties.VideoTorchMode);
if (supportedCameraModes.ToList().Contains((UInt32)VideoTorchMode.On))
{
avDevice.SetProperty(KnownCameraAudioVideoProperties.VideoTorchMode, VideoTorchMode.On);
// set flash power to maximum
avDevice.SetProperty(KnownCameraAudioVideoProperties.VideoTorchPower,
AudioVideoCaptureDevice.GetSupportedPropertyRange(sensorLocation, KnownCameraAudioVideoProperties.VideoTorchPower).Max);
}
else
{
//ShowWhiteScreenInsteadOfCameraTorch();
}
}
catch (Exception ex)
{
// Flashlight isn't supported on this device, instead show a White Screen as the flash light
// ShowWhiteScreenInsteadOfCameraTorch();
}
}
private void Button_Click_TurnOff(object sender, RoutedEventArgs e)
{
var sensorLocation = CameraSensorLocation.Back;
try
{
// turn flashlight off
var supportedCameraModes = AudioVideoCaptureDevice
.GetSupportedPropertyValues(sensorLocation, KnownCameraAudioVideoProperties.VideoTorchMode);
if (this.Device != null && supportedCameraModes.ToList().Contains((UInt32)VideoTorchMode.Off))
{
this.Device.SetProperty(KnownCameraAudioVideoProperties.VideoTorchMode, VideoTorchMode.Off);
}
else
{
//turnWhiteScreen(false);
}
}
catch (Exception ex)
{
// Flashlight isn't supported on this device, instead show a White Screen as the flash light
//turnWhiteScreen(false);
}
}
I copied it from another question on Stack Overflow, but I don't know why this code doesn't work for me. Tested on a Lumia 820.
Please help me, thank you very much :)
// assumes a field that holds the opened capture device:
private AudioVideoCaptureDevice audioCaptureDevice;
async private void FlashlightOn_Click(object sender, RoutedEventArgs e)
{
// turn flashlight on
CameraSensorLocation location = CameraSensorLocation.Back;
if (this.audioCaptureDevice == null)
{
audioCaptureDevice = await AudioVideoCaptureDevice.OpenAsync(location,
AudioVideoCaptureDevice.GetAvailableCaptureResolutions(location).First());
}
FlashOn(location, VideoTorchMode.On);
}
private void FlashlightOff_Click(object sender, RoutedEventArgs e)
{
// turn flashlight off
var sensorLocation = CameraSensorLocation.Back;
FlashOn(sensorLocation, VideoTorchMode.Off);
}
public bool FlashOn(CameraSensorLocation location, VideoTorchMode mode)
{
// turn flashlight on/off
var supportedCameraModes = AudioVideoCaptureDevice
.GetSupportedPropertyValues(location, KnownCameraAudioVideoProperties.VideoTorchMode);
if ((audioCaptureDevice != null) && (supportedCameraModes.ToList().Contains((UInt32)mode)))
{
audioCaptureDevice.SetProperty(KnownCameraAudioVideoProperties.VideoTorchMode, mode);
return true;
}
return false;
}
I am able to capture the system audio coming from the speakers with the help of WasapiLoopbackCapture (NAudio). The problem is that it captures to a WAV file, and the WAV file is very large (almost 10 to 15 MB per minute). I have to capture 2-3 hours of audio, so this is far too much.
I am looking for a solution that converts the WAV stream captured by WasapiLoopbackCapture to MP3 and then saves that to disk. I have tried a lot with LAME.exe and other approaches but without success. Is there any working code?
Here is my code:
private void button1_Click(object sender, EventArgs e){
LoopbackRecorder obj = new LoopbackRecorder();
string a = textBox1.Text;
obj.StartRecording(#"e:\aman.mp3");
}
public class LoopbackRecorder
{
private IWaveIn _waveIn;
private Mp3WaveFormat _mp3format;
private WaveFormat _wavFormat;
private WaveFileWriter _writer;
private bool _isRecording = false;
/// <summary>
/// Constructor
/// </summary>
public LoopbackRecorder()
{
}
/// <summary>
/// Starts the recording.
/// </summary>
/// <param name="fileName"></param>
public void StartRecording(string fileName)
{
// If we are currently record then go ahead and exit out.
if (_isRecording == true)
{
return;
}
_fileName = fileName;
_waveIn = new WasapiLoopbackCapture();
// _waveIn.WaveFormat = new WaveFormat(16000, 16 , 2);
_writer = new WaveFileWriter(fileName, _waveIn.WaveFormat);
_waveIn.DataAvailable += OnDataAvailable;
// _waveIn.RecordingStopped += OnRecordingStopped;
_waveIn.StartRecording();
_isRecording = true;
}
private void OnDataAvailable(object sender, WaveInEventArgs waveInEventArgs)
{
if (_writer == null)
{
_writer = new WaveFileWriter(#"e:\aman.mp3", _waveIn.WaveFormat);
}
_writer.Write(waveInEventArgs.Buffer, 0, waveInEventArgs.BytesRecorded);
byte[] by= Float32toInt16(waveInEventArgs.Buffer, 0, waveInEventArgs.BytesRecorded);
}
private string _fileName = "";
/// <summary>
/// The name of the file that was set when StartRecording was called. E.g. the current file being written to.
/// </summary>
public string FileName
{
get
{
return _fileName;
}
}
}
Here's an example of using NAudio.Lame (in a console application) to capture data from the sound card loopback and write it directly to an MP3 file:
using System;
using NAudio.Lame;
using NAudio.Wave;
namespace MP3Rec
{
class Program
{
static LameMP3FileWriter wri;
static bool stopped = false;
static void Main(string[] args)
{
// Start recording from loopback
IWaveIn waveIn = new WasapiLoopbackCapture();
waveIn.DataAvailable += waveIn_DataAvailable;
waveIn.RecordingStopped += waveIn_RecordingStopped;
// Set up the MP3 writer to output at 32 kbit/s (roughly 4 minutes of audio per MB)
wri = new LameMP3FileWriter(@"C:\temp\test_output.mp3", waveIn.WaveFormat, 32);
waveIn.StartRecording();
stopped = false;
// Keep recording until Escape key pressed
while (!stopped)
{
if (Console.KeyAvailable)
{
var key = Console.ReadKey(true);
if (key != null && key.Key == ConsoleKey.Escape)
waveIn.StopRecording();
}
else
System.Threading.Thread.Sleep(50);
}
// flush output to finish MP3 file correctly
wri.Flush();
// Dispose of objects
waveIn.Dispose();
wri.Dispose();
}
static void waveIn_RecordingStopped(object sender, StoppedEventArgs e)
{
// signal that recording has finished
stopped = true;
}
static void waveIn_DataAvailable(object sender, WaveInEventArgs e)
{
// write recorded data to MP3 writer
if (wri != null)
wri.Write(e.Buffer, 0, e.BytesRecorded);
}
}
}
At the moment the NAudio.Lame package on NuGet is compiled for x86 only, so make sure your application is set to target that platform.
To convert the audio to MP3 on the fly, one of the easiest ways is to use the command line options that let you pass audio into LAME.exe via stdin. I describe how to do that in this article.
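A rough sketch of that approach (the flags are illustrative, and lame.exe must be available locally; note that WasapiLoopbackCapture delivers IEEE-float samples, which would need converting to 16-bit PCM before being fed to LAME's raw input):
// Launch lame.exe and feed it raw PCM via stdin ("-" means read audio from stdin).
var lame = new System.Diagnostics.Process
{
    StartInfo = new System.Diagnostics.ProcessStartInfo
    {
        FileName = "lame.exe",
        Arguments = "-r -s 44.1 --bitwidth 16 -b 128 - output.mp3",
        UseShellExecute = false,
        RedirectStandardInput = true
    }
};
lame.Start();
// In the DataAvailable handler, write the (converted) PCM bytes to LAME's stdin:
// lame.StandardInput.BaseStream.Write(pcmBuffer, 0, pcmBytesRecorded);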
You may also be interested that Corey Murtagh has created a LAME package for NAudio. I haven't tried it myself, but it looks like it should also do the job. Documentation is here.