I have a double array that contains a waveform, and I want to play it. First I tried this code (for just one sample):
private void DoPlaySound(double p)
{
    double[] d = new double[1] { p };
    Complex[] c = (DoubleToComplex(d)).ToArray();
    FourierTransform.DFT(c, FourierTransform.Direction.Forward);
    // Stream.Null discards everything written to it, so a MemoryStream is needed here
    MemoryStream s = new MemoryStream();
    StreamWriter w = new StreamWriter(s);
    w.Write(c[0].Re);
    w.Flush();
    s.Position = 0;
    SoundPlayer sndp = new SoundPlayer(s);
    sndp.PlayLooping();
}
but System.Media.SoundPlayer.PlayLooping() needs a WAV header, and I don't have one and don't know how to generate it.
I also tried winmm, but I don't know how to play a wave file with it.
Try using NAudio.
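For instance, NAudio's WaveFileWriter can generate the WAV header for you. A minimal sketch follows; the sample rate, the mono 16-bit format, and the assumption that the doubles are in [-1.0, 1.0] are mine, not from the question:

```csharp
using System.IO;
using System.Media;
using NAudio.Utils;
using NAudio.Wave;

static void PlaySamplesLooping(double[] samples, int sampleRate = 44100)
{
    var ms = new MemoryStream();
    var format = new WaveFormat(sampleRate, 16, 1); // 16-bit PCM, mono

    // IgnoreDisposeStream keeps the MemoryStream open after the writer
    // is disposed; disposing the writer finalizes the WAV header.
    using (var writer = new WaveFileWriter(new IgnoreDisposeStream(ms), format))
    {
        foreach (var sample in samples)
            writer.WriteSample((float)sample); // converted to 16-bit PCM by the writer
    }

    ms.Position = 0; // rewind so SoundPlayer reads from the RIFF header
    new SoundPlayer(ms).PlayLooping();
}
```

Disposing the writer is what writes the final header sizes, so the rewind has to happen after the `using` block.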
I'm trying to use one of the two methods below from the NAudio NuGet package:
MediaFoundationEncoder.EncodeToMp3(new RawSourceWaveStream(_audioStream, _audioWriter.WaveFormat), "Test.mp3", 320000);
using (var reader = new WaveFileReader(_audioStream))
{
    MediaFoundationEncoder.EncodeToMp3(reader, "Test.mp3", 320000);
}
This should encode the WAV stream directly to MP3. Well, it does, but the result consists only of noise.
When I save the same MemoryStream (the one WaveFileWriter wrote to) with
File.WriteAllBytes("Test.wav", _audioStream.ToArray());
MediaFoundationEncoder.EncodeToMp3(new WaveFileReader("Test.wav"), "Test.mp3", 320000);
the created wav file is fine, and I can use it to encode to MP3. But I want to avoid saving the wav file first.
Is there a way to encode the stream directly to MP3 without getting only noise? I have no clue why one method works and the other doesn't, and Google didn't help either. So maybe you have an idea.
Thanks for your effort.
Further info:
.NET Core 3.0 Windows Application
NAudio Nuget Package (Version: 1.9.0)
Complete class:
using NAudio.MediaFoundation;
using NAudio.Wave;
using System;
using System.IO;
namespace Recorder.NAudio
{
    public class Recorder
    {
        private WasapiLoopbackCapture _audioCapture;
        private WaveFileWriter _audioWriter;
        private MemoryStream _audioStream;

        public bool IsRecording { get; private set; }

        public Recorder()
        {
            MediaFoundationApi.Startup();
        }

        public void Init()
        {
            _audioCapture = new WasapiLoopbackCapture();
            _audioStream = new MemoryStream(1024);
            _audioWriter = new WaveFileWriter(_audioStream, new WaveFormat(44100, 24, 2));
            _audioCapture.DataAvailable += DataReceived;
            _audioCapture.RecordingStopped += RecordingStopped;
        }

        private void RecordingStopped(object sender, StoppedEventArgs e)
        {
            _audioCapture.DataAvailable -= DataReceived;
            _audioCapture.RecordingStopped -= RecordingStopped;
            _audioCapture.Dispose();
            _audioCapture = null;
            _audioWriter.Flush();
            _audioStream.Flush();
            _audioStream.Position = 0;

            // This works: the mp3 file is fine, but I want to avoid the workaround
            // of first saving a wav file to my hard drive.
            File.WriteAllBytes("Test.wav", _audioStream.ToArray());
            MediaFoundationEncoder.EncodeToMp3(new WaveFileReader("Test.wav"), "Test.mp3", 320000);

            // Try 1: doesn't work, the mp3 file consists only of noise.
            MediaFoundationEncoder.EncodeToMp3(new RawSourceWaveStream(_audioStream, _audioWriter.WaveFormat), "Test.mp3", 320000);

            // Try 2: doesn't work, the mp3 file consists only of noise.
            using (var reader = new WaveFileReader(_audioStream))
            {
                MediaFoundationEncoder.EncodeToMp3(reader, "Test.mp3", 320000);
            }

            _audioWriter.Close();
            _audioWriter.Dispose();
            _audioWriter = null;
            _audioStream.Close();
            _audioStream.Dispose();
            _audioStream = null;
            GC.Collect();
        }

        private void DataReceived(object sender, WaveInEventArgs e)
        {
            _audioWriter.Write(e.Buffer, 0, e.BytesRecorded);
        }

        public void Start()
        {
            Init();
            _audioCapture.StartRecording();
            IsRecording = true;
        }

        public void Stop()
        {
            _audioCapture.StopRecording();
            IsRecording = false;
        }
    }
}
I found a solution. When I change the constructor of the WaveFileWriter to this:
_audioWriter = new WaveFileWriter(_audioStream, _audioCapture.WaveFormat);
and then change the audio settings of my device in the Windows sound settings dialog to 2 channels, 24 bit, 44100 Hz instead of 2 channels, 24 bit, 96000 Hz,
it works. Presumably the noise came from a format mismatch: the writer's WaveFormat has to match the bytes the capture actually delivers.
Part 1
I have some NAudio related code
private void InitAudioOut(DateTime dtNow)
{
    _pathOut = string.Format(BaseDirectory + @"\({0:HH-mm-ss dd-MM-yyyy} OUT).wav", dtNow);
    _waveOut = new WasapiLoopbackCapture();
    _waveOut.DataAvailable += WaveOutDataAvailable;
    _waveOut.RecordingStopped += WaveOutRecordStopped;
    _waveOutFileStream = new WaveFileWriter(_pathOut, _waveOut.WaveFormat);
    _waveOut.StartRecording();
}
With this initialization of the sound recording process I have the following WaveOutDataAvailable method:
private void WaveOutDataAvailable(object sender, WaveInEventArgs e)
{
    var buf = e.Buffer;
    _waveOutFileStream.Write(buf, 0, buf.Length);
    _waveOutFileStream.Flush();
}
The sound in the resulting file is intermittent and slow, like having "blank" sections between the sound chunks, any ideas are appreciated.
End of part 1
Part 2
There is another version of this code where I'm trying to convert the WAV stream to an MP3 stream on the fly and then write it to a file; it looks like this:
private void InitAudioIn(DateTime dtNow)
{
    _pathIn = string.Format(BaseDirectory + @"\({0:HH-mm-ss dd-MM-yyyy} IN).mp3", dtNow);
    _waveIn = new WaveInEvent();
    _waveIn.WaveFormat = new WaveFormat(44100, 2);
    _waveIn.DataAvailable += WaveInDataAvailable;
    _waveIn.RecordingStopped += WaveInRecordStopped;
    _waveInFileStream = File.Create(_pathIn);
    _waveIn.StartRecording();
}
With the WaveInDataAvailable method as follows:
private void WaveInDataAvailable(object sender, WaveInEventArgs e)
{
    var wavToMp3Buffer = ConvertWavToMp3(e.Buffer, _waveIn.WaveFormat);
    _waveInFileStream.Write(wavToMp3Buffer, 0, wavToMp3Buffer.Length);
    _waveInFileStream.Flush();
}
The ConvertWavToMp3 method:
public byte[] ConvertWavToMp3(byte[] wavContent, WaveFormat waveFormat)
{
    using (var baseMemoryStream = new MemoryStream())
    using (var wavToMp3Writer = new LameMP3FileWriter(baseMemoryStream, waveFormat, 64))
    {
        wavToMp3Writer.Write(wavContent, 0, wavContent.Length);
        wavToMp3Writer.Flush();
        return baseMemoryStream.ToArray();
    }
}
If I don't try to convert it to MP3 and just write it as a WAV file, then it's absolutely fine; but if I try the MP3 conversion through the ConvertWavToMp3 method, the sound gets slow and intermittent. What is wrong with this implementation?
For the first part: you are making an invalid assumption that the buffer length is the same as the number of valid bytes in the buffer. Try:
private void WaveOutDataAvailable(object sender, WaveInEventArgs e)
{
    _waveOutFileStream.Write(e.Buffer, 0, e.BytesRecorded);
}
Let the output stream handle flushing automatically. Trying to force data to disk like that will either not work or in some cases can cause unexpected results like partial block writes that can interfere with your data. Flush at the end of the recording, not during.
As to the second part...
Your code is creating a file that is the concatenation of a series of MP3 files, one for each buffer passed to your WaveInDataAvailable method, and including all the blank space at the end of those buffers. Of course it's not going to play back properly.
If you want to write an MP3 then do it directly. Make your _waveInFileStream an instance of LameMP3FileWriter and let it handle the work itself. Not only is this going to produce a much more useful output but you save yourself a lot of inefficient messing around with setting up and tearing down the encoder for every data block you receive.
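A sketch of that change, reusing the question's field names; the 64 kbps bitrate is simply carried over from the original ConvertWavToMp3 call:

```csharp
// One encoder for the whole session produces a single valid MP3 stream,
// instead of a concatenation of per-buffer MP3 files.
private LameMP3FileWriter _waveInFileStream;

private void InitAudioIn(DateTime dtNow)
{
    _pathIn = string.Format(BaseDirectory + @"\({0:HH-mm-ss dd-MM-yyyy} IN).mp3", dtNow);
    _waveIn = new WaveInEvent();
    _waveIn.WaveFormat = new WaveFormat(44100, 2);
    _waveIn.DataAvailable += WaveInDataAvailable;
    _waveIn.RecordingStopped += WaveInRecordStopped;
    _waveInFileStream = new LameMP3FileWriter(_pathIn, _waveIn.WaveFormat, 64);
    _waveIn.StartRecording();
}

private void WaveInDataAvailable(object sender, WaveInEventArgs e)
{
    // Only the valid bytes, and no manual Flush.
    _waveInFileStream.Write(e.Buffer, 0, e.BytesRecorded);
}
```

Dispose the writer in the RecordingStopped handler so the encoder can finalize the file.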
I need to convert a wave that I created inside my app into a byte array and then back.
I have no clue how to start.
This is my class where I create the sound file:
private void forecast(string forecast)
{
    MemoryStream streamAudio = new MemoryStream();
    System.Media.SoundPlayer m_SoundPlayer = new System.Media.SoundPlayer();
    SpeechSynthesizer speech = new SpeechSynthesizer();
    speech.SetOutputToWaveStream(streamAudio);
    speech.Speak(forecast);
    streamAudio.Position = 0;
    m_SoundPlayer.Stream = streamAudio;
    m_SoundPlayer.Play();
    // Set the synthesizer output to null to release the stream.
    speech.SetOutputToNull();
}
After you've called Speak, the data is in the MemoryStream. You can get that to a byte array and do whatever you like:
speech.Speak(forecast);
byte[] speechBytes = streamAudio.ToArray();
speechBytes contains the data you're looking for.
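To go back the other way, you can wrap the same bytes in a new MemoryStream. This sketch assumes the array still starts with the WAV header that SetOutputToWaveStream produced:

```csharp
byte[] speechBytes = streamAudio.ToArray();

// Back from byte[] to a playable stream: SoundPlayer just needs a
// seekable stream positioned at the RIFF/WAV header.
using (var playbackStream = new MemoryStream(speechBytes))
using (var player = new System.Media.SoundPlayer(playbackStream))
{
    player.PlaySync();
}
```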
I need a fast method to store all samples of a wav file in an array. I am currently working around this problem by playing the music and storing the values from the Sample Provider, but this is not very elegant.
From the NAudio demo I have the AudioPlayer class with this method:
private ISampleProvider CreateInputStream(string fileName)
{
    if (fileName.EndsWith(".wav"))
    {
        fileStream = OpenWavStream(fileName);
    }
    else
    {
        throw new InvalidOperationException("Unsupported extension");
    }

    var inputStream = new SampleChannel(fileStream, true);
    var sampleStream = new NotifyingSampleProvider(inputStream);
    SampleRate = sampleStream.WaveFormat.SampleRate;
    // at this point the aggregator gets the current sample value while the wav file plays
    sampleStream.Sample += (s, e) => { aggregator.Add(e.Left); };
    return sampleStream;
}
I want to skip this process of collecting sample values during playback; instead I want all the values immediately, without waiting until the end of the file. Basically like the wavread command in MATLAB.
Use AudioFileReader to read the file. This will automatically convert to IEEE float samples. Then repeatedly call the Read method to read a block of samples into a float[] array.
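A sketch of that loop (AudioFileReader is in NAudio.Wave; the buffer size is an arbitrary choice):

```csharp
using System.Collections.Generic;
using NAudio.Wave;

static float[] ReadAllSamples(string fileName)
{
    using (var reader = new AudioFileReader(fileName)) // decodes to IEEE float samples
    {
        var samples = new List<float>();
        var buffer = new float[16384]; // arbitrary block size
        int read;
        while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
        {
            for (int i = 0; i < read; i++)
                samples.Add(buffer[i]); // samples are interleaved across channels
        }
        return samples.ToArray();
    }
}
```

Note the result is interleaved; to get per-channel arrays, de-interleave using reader.WaveFormat.Channels.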
I need to read a certain amount of short (int16) data points from a binary file, starting at a specific position. Thanks!
Something like this should do it for you:
private IEnumerable<short> getShorts(string fileName, int start, int count)
{
    using (var stream = File.OpenRead(fileName))
    {
        stream.Seek(start, SeekOrigin.Begin);
        var reader = new BinaryReader(stream);
        var list = new List<short>(count);
        for (var i = 0; i < count; i++)
        {
            list.Add(reader.ReadInt16());
        }
        return list;
    }
}
which is basically what CAsper wrote, just in code.
You can simply call the Seek method on the Stream that you pass to BinaryReader to the position in the file you want to start reading from.
Then, once you pass the stream to BinaryReader, you can call the ReadInt16 method as many times as you need to.