Storing a wav file in an array - c#

I need a fast method to store all samples of a wav file in an array. I am currently working around this problem by playing the music and storing the values from the Sample Provider, but this is not very elegant.
From the NAudio demo I have the AudioPlayer class with this method:
private ISampleProvider CreateInputStream(string fileName)
{
    if (fileName.EndsWith(".wav"))
    {
        fileStream = OpenWavStream(fileName);
    }
    else
    {
        throw new InvalidOperationException("Unsupported extension");
    }
    var inputStream = new SampleChannel(fileStream, true);
    var sampleStream = new NotifyingSampleProvider(inputStream);
    SampleRate = sampleStream.WaveFormat.SampleRate;
    sampleStream.Sample += (s, e) => { aggregator.Add(e.Left); }; // at this point the aggregator receives each sample value while the wav file is playing
    return sampleStream;
}
I want to skip this process of getting the sample values while playing the file; instead I want the values immediately, without waiting until the end of the file. Basically like the wavread command in MATLAB.

Use AudioFileReader to read the file. This will automatically convert to IEEE float samples. Then repeatedly call the Read method to read a block of samples into a float[] array.
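A minimal sketch of that approach (the file name and the list are illustrative, not part of the answer):
using System.Collections.Generic;
using NAudio.Wave;

// Read every sample of the wav file up front, without playing it
// (roughly what MATLAB's wavread gives you, but as interleaved floats).
var samples = new List<float>();
using (var reader = new AudioFileReader("input.wav")) // hypothetical path
{
    // one second of audio per read
    var buffer = new float[reader.WaveFormat.SampleRate * reader.WaveFormat.Channels];
    int read;
    while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
    {
        for (int i = 0; i < read; i++)
        {
            samples.Add(buffer[i]);
        }
    }
}
float[] allSamples = samples.ToArray(); // channels are interleaved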

Related

How to record an input device with more than 2 channels to mp3 format

I am building recording software that records all devices connected to the PC into mp3 format.
Here is my code:
IWaveIn _captureInstance = inputDevice.DataFlow == DataFlow.Render ?
    new WasapiLoopbackCapture(inputDevice) : new WasapiCapture(inputDevice);

var waveFormatToUse = _captureInstance.WaveFormat;
var sampleRateToUse = waveFormatToUse.SampleRate;
var channelsToUse = waveFormatToUse.Channels;

if (sampleRateToUse > 48000) // LameMP3FileWriter doesn't support rates above 48000 Hz
{
    sampleRateToUse = 48000;
}
else if (sampleRateToUse < 8000) // LameMP3FileWriter doesn't support rates below 8000 Hz
{
    sampleRateToUse = 8000;
}
if (channelsToUse > 2) // LameMP3FileWriter doesn't support more than 2 channels
{
    channelsToUse = 2;
}

waveFormatToUse = WaveFormat.CreateCustomFormat(_captureInstance.WaveFormat.Encoding,
    sampleRateToUse,
    channelsToUse,
    _captureInstance.WaveFormat.AverageBytesPerSecond,
    _captureInstance.WaveFormat.BlockAlign,
    _captureInstance.WaveFormat.BitsPerSample);

_mp3FileWriter = new LameMP3FileWriter(_currentStream, waveFormatToUse, 32);
This code works properly except when a connected device (including virtual ones such as SteelSeries Sonar) has more than 2 channels. In that case the recordings contain only noise.
How can I solve this issue? I'm not tied to LameMP3FileWriter; I just need mp3 or any other format with good compression. If possible I'd also like to avoid intermediate files on disk (all processing in memory), writing only the final audio file.
My recording code:
// When the capturer receives audio, write the buffer to the mp3 writer
_captureInstance.DataAvailable += (s, a) =>
{
    lock (_writerLock)
    {
        // Write buffer into the file of the writer instance
        _mp3FileWriter?.Write(a.Buffer, 0, a.BytesRecorded);
    }
};

// When the capturer stops, dispose the writer and capturer instances
_captureInstance.RecordingStopped += (s, a) =>
{
    lock (_writerLock)
    {
        _mp3FileWriter?.Dispose();
    }
    _captureInstance?.Dispose();
};

// Start audio recording
_captureInstance.StartRecording();
If LAME doesn't support more than 2 channels, you can't use this encoder for your purpose. Have you tried it with the Fraunhofer surround MP3 encoder?
Link: https://download.cnet.com/mp3-surround-encoder/3000-2140_4-165541.html
Also, here's a nice article discussing how to convert between most audio formats (with C# code samples): https://www.codeproject.com/articles/501521/how-to-convert-between-most-audio-formats-in-net
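Alternatively, a sketch of one way to keep LAME (not part of this answer, and it assumes the WASAPI mix format is IEEE float and the device rate is within LAME's supported range of 8-48 kHz): drop everything beyond the first two channels before the data reaches the writer, so the bytes actually match the 2-channel format you declare. This would replace the DataAvailable handler shown in the question.
int inChannels = _captureInstance.WaveFormat.Channels;
var stereoFormat = WaveFormat.CreateIeeeFloatWaveFormat(_captureInstance.WaveFormat.SampleRate, 2);
_mp3FileWriter = new LameMP3FileWriter(_currentStream, stereoFormat, 32);

_captureInstance.DataAvailable += (s, a) =>
{
    int bytesPerSample = 4; // 32-bit float
    int frames = a.BytesRecorded / (bytesPerSample * inChannels);
    var stereoBytes = new byte[frames * 2 * bytesPerSample];
    for (int frame = 0; frame < frames; frame++)
    {
        // copy channels 0 and 1 of this frame, discard the rest
        Buffer.BlockCopy(a.Buffer, frame * inChannels * bytesPerSample,
                         stereoBytes, frame * 2 * bytesPerSample, 2 * bytesPerSample);
    }
    lock (_writerLock)
    {
        _mp3FileWriter?.Write(stereoBytes, 0, stereoBytes.Length);
    }
};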

NAudio WasapiLoopbackCapture intermittent sound

Part 1
I have some NAudio-related code:
private void InitAudioOut(DateTime dtNow)
{
    _pathOut = string.Format(BaseDirectory + @"\({0:HH-mm-ss dd-MM-yyyy} OUT).wav", dtNow);
    _waveOut = new WasapiLoopbackCapture();
    _waveOut.DataAvailable += WaveOutDataAvailable;
    _waveOut.RecordingStopped += WaveOutRecordStopped;
    _waveOutFileStream = new WaveFileWriter(_pathOut, _waveOut.WaveFormat);
    _waveOut.StartRecording();
}
With this initialization of the sound recording process, I have the following WaveOutDataAvailable method:
private void WaveOutDataAvailable(object sender, WaveInEventArgs e)
{
    var buf = e.Buffer;
    _waveOutFileStream.Write(buf, 0, buf.Length);
    _waveOutFileStream.Flush();
}
The sound in the resulting file is intermittent and slow, as if there were "blank" sections between the sound chunks. Any ideas are appreciated.
End of part 1
Part 2
There is another version of this code where I'm trying to convert the WAV stream to an mp3 stream on the fly and then write it to a file. It looks like this:
private void InitAudioIn(DateTime dtNow)
{
    _pathIn = string.Format(BaseDirectory + @"\({0:HH-mm-ss dd-MM-yyyy} IN).mp3", dtNow);
    _waveIn = new WaveInEvent();
    _waveIn.WaveFormat = new WaveFormat(44100, 2);
    _waveIn.DataAvailable += WaveInDataAvailable;
    _waveIn.RecordingStopped += WaveInRecordStopped;
    _waveInFileStream = File.Create(_pathIn);
    _waveIn.StartRecording();
}
With the WaveInDataAvailable method as follows:
private void WaveInDataAvailable(object sender, WaveInEventArgs e)
{
    var wavToMp3Buffer = ConvertWavToMp3(e.Buffer, _waveIn.WaveFormat);
    _waveInFileStream.Write(wavToMp3Buffer, 0, wavToMp3Buffer.Length);
    _waveInFileStream.Flush();
}
The ConvertWavToMp3 method:
public byte[] ConvertWavToMp3(byte[] wavContent, WaveFormat waveFormat)
{
    using (var baseMemoryStream = new MemoryStream())
    using (var wavToMp3Writer = new LameMP3FileWriter(baseMemoryStream, waveFormat, 64))
    {
        wavToMp3Writer.Write(wavContent, 0, wavContent.Length);
        wavToMp3Writer.Flush();
        return baseMemoryStream.ToArray();
    }
}
If I don't convert to MP3 and just write it as a WAV file, it's absolutely fine, but if I run the MP3 conversion through the ConvertWavToMp3 method, the sound becomes slow and intermittent. What is wrong with this implementation?
First part: you are making an invalid assumption that the buffer length is the same as the number of valid bytes in the buffer. Try:
private void WaveOutDataAvailable(object sender, WaveInEventArgs e)
{
    _waveOutFileStream.Write(e.Buffer, 0, e.BytesRecorded);
}
Let the output stream handle flushing automatically. Trying to force data to disk like that will either do nothing or, in some cases, cause unexpected results such as partial block writes that can interfere with your data. Flush at the end of the recording, not during it.
As to the second part...
Your code is creating a file that is the concatenation of a series of MP3 files, one for each buffer passed to your WaveInDataAvailable method, and including all the blank space at the end of those buffers. Of course it's not going to play back properly.
If you want to write an MP3 then do it directly. Make your _waveInFileStream an instance of LameMP3FileWriter and let it handle the work itself. Not only is this going to produce a much more useful output but you save yourself a lot of inefficient messing around with setting up and tearing down the encoder for every data block you receive.
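For example, a minimal sketch of that suggestion (field names mirror the question; _mp3Writer is a hypothetical replacement for _waveInFileStream):
private LameMP3FileWriter _mp3Writer;

private void InitAudioIn(DateTime dtNow)
{
    _pathIn = string.Format(BaseDirectory + @"\({0:HH-mm-ss dd-MM-yyyy} IN).mp3", dtNow);
    _waveIn = new WaveInEvent();
    _waveIn.WaveFormat = new WaveFormat(44100, 2);
    _mp3Writer = new LameMP3FileWriter(_pathIn, _waveIn.WaveFormat, 64);

    // Encode incoming buffers straight into the single writer instance
    _waveIn.DataAvailable += (s, e) => _mp3Writer.Write(e.Buffer, 0, e.BytesRecorded);
    _waveIn.RecordingStopped += (s, e) => _mp3Writer.Dispose();

    _waveIn.StartRecording();
}
This keeps one encoder alive for the whole recording instead of creating and tearing one down per buffer.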

Converting a Wav file to a bit array and back (streamaudio)

I need to convert a wave that I created inside my app into a bit array and then back.
I have no clue how to start.
This is my class where I create the sound file:
private void forecast(string forecast)
{
    MemoryStream streamAudio = new MemoryStream();
    System.Media.SoundPlayer m_SoundPlayer = new System.Media.SoundPlayer();
    SpeechSynthesizer speech = new SpeechSynthesizer();
    speech.SetOutputToWaveStream(streamAudio);
    speech.Speak(forecast);
    streamAudio.Position = 0;
    m_SoundPlayer.Stream = streamAudio;
    m_SoundPlayer.Play();
    // Set the synthesizer output to null to release the stream.
    speech.SetOutputToNull();
}
After you've called Speak, the data is in the MemoryStream. You can get that to a byte array and do whatever you like:
speech.Speak(forecast);
byte[] speechBytes = streamAudio.ToArray();
speechBytes contains the data you're looking for.
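And to go back the other way (a sketch, not part of the original answer): SetOutputToWaveStream writes a complete wav stream, so the byte array is a valid wav file and can simply be wrapped in a new MemoryStream for playback.
byte[] speechBytes = streamAudio.ToArray();

// Recreate a playable stream from the byte array
using (var playbackStream = new MemoryStream(speechBytes))
using (var player = new System.Media.SoundPlayer(playbackStream))
{
    player.PlaySync();
}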

Processing 16bit sample audio

Right now I have an audio file (2 channels, 44.1 kHz sample rate, 16-bit sample size, WAV). I would like to pass it into this method, but I am not sure how to convert the WAV file to a byte array.
/// <summary>
/// Process 16 bit sample
/// </summary>
/// <param name="wave"></param>
public void Process(ref byte[] wave)
{
    _waveLeft = new double[wave.Length / 4];
    _waveRight = new double[wave.Length / 4];
    if (_isTest == false)
    {
        // Split out channels from sample
        int h = 0;
        for (int i = 0; i < wave.Length; i += 4)
        {
            _waveLeft[h] = (double)BitConverter.ToInt16(wave, i);
            _waveRight[h] = (double)BitConverter.ToInt16(wave, i + 2);
            h++;
        }
    }
    else
    {
        // Generate artificial sample for testing
        _signalGenerator = new SignalGenerator();
        _signalGenerator.SetWaveform("Sine");
        _signalGenerator.SetSamplingRate(44100);
        _signalGenerator.SetSamples(16384);
        _signalGenerator.SetFrequency(5000);
        _signalGenerator.SetAmplitude(32768);
        _waveLeft = _signalGenerator.GenerateSignal();
        _waveRight = _signalGenerator.GenerateSignal();
    }
    // Generate frequency domain data in decibels
    _fftLeft = FourierTransform.FFTDb(ref _waveLeft);
    _fftRight = FourierTransform.FFTDb(ref _waveRight);
}
Edit: Hi, sorry for the confusion. I'm new to audio signal processing, so my explanation of what I want was wrong. For this method to work correctly, I believe I need to pass in the byte array of the wav file's data chunk only. The end result would be to apply the FFT to it as shown in the code and transform it into a spectrogram. Thanks.
You need:
using System.IO;
and this code to get the byte array:
byte[] data = File.ReadAllBytes(PathToFile);
where PathToFile is the location (as a string) of the .wav file.
Edit:
Right now I have an audio file (2 channels, 44.1 kHz sample rate, 16-bit sample size, WAV). I would like to pass it into this method, but I am not sure how to convert the WAV file to a byte array.
He asks for a way to get the byte array from the .wav file; he didn't say anything about extracting the specific part of the byte array that contains the music data.
So downvoting a correct answer is..
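That said, given the edit above, if only the bytes of the data chunk are wanted, one option (my assumption, using NAudio rather than plain file IO) is WaveFileReader, which skips the RIFF header and exposes just the sample data:
using NAudio.Wave;

using (var reader = new WaveFileReader(PathToFile))
{
    // reader.Length is the size of the data chunk in bytes
    byte[] data = new byte[reader.Length];
    reader.Read(data, 0, data.Length);
    // data now holds the interleaved 16-bit samples expected by Process(ref data)
}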

How do I read shorts from a binary file starting at position x, for y values?

I need to read a certain number of short (int16) data points from a binary file, starting at a specific position. Thanks!
Something like this should do it for you:
private IEnumerable<short> getShorts(string fileName, int start, int count)
{
    using (var stream = File.OpenRead(fileName))
    {
        stream.Seek(start, SeekOrigin.Begin);
        var reader = new BinaryReader(stream);
        var list = new List<short>(count);
        for (var i = 0; i < count; i++)
        {
            list.Add(reader.ReadInt16());
        }
        return list;
    }
}
which is basically what CAsper wrote, just in code.
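Usage might then look like this (the 44-byte offset assumes a canonical PCM wav header, which is an assumption about the file layout):
// Skip the standard 44-byte header and read 1024 sample values
foreach (short sample in getShorts("audio.wav", 44, 1024))
{
    Console.WriteLine(sample);
}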
You can simply call the Seek method on the Stream that you pass to BinaryReader to move to the position in the file where you want to start reading.
Then, once you pass the stream to BinaryReader, you can call the ReadInt16 method as many times as you need to.
