NAudio record and save microphone input and speaker output - c#

I want to record conversations over Skype or similar applications (these recordings will be processed after being saved). I was trying to accomplish that with NAudio.
So far I have managed to record the speaker audio using WasapiLoopbackCapture and save it to a WAV file, and I have also managed to record and save the microphone audio using WaveIn. The main problem is that I cannot mix these two files into a single file following the approach described in this link: https://github.com/naudio/NAudio/blob/master/Docs/MixTwoAudioFilesToWav.md
The function where I start my recording looks like this:
waveSourceSpeakers = new WasapiLoopbackCapture();
string outputFilePath = @"xxxx\xxx\xxx";
waveFileSpeakers = new WaveFileWriter(outputFilePath, waveSourceSpeakers.WaveFormat);
waveSourceSpeakers.DataAvailable += (s, a) =>
{
    waveFileSpeakers.Write(a.Buffer, 0, a.BytesRecorded);
};
waveSourceSpeakers.RecordingStopped += (s, a) =>
{
    waveFileSpeakers.Dispose();
    waveFileSpeakers = null;
    waveSourceSpeakers.Dispose();
};
waveSourceSpeakers.StartRecording();

waveSourceMic = new WaveIn();
waveSourceMic.WaveFormat = new WaveFormat(44100, 1);
waveSourceMic.DataAvailable += new EventHandler<WaveInEventArgs>(waveSource_DataAvailable);
waveSourceMic.RecordingStopped += new EventHandler<StoppedEventArgs>(waveSource_RecordingStopped);
waveFileMic = new WaveFileWriter(@"xxxx\xxx\xxx", waveSourceMic.WaveFormat);
waveSourceMic.StartRecording();
The function where I try to mix my 2 wav files looks like this:
using (var reader1 = new AudioFileReader(@"xxx\xxx\file1.wav"))
using (var reader2 = new AudioFileReader(@"xxx\xxx\file2.wav"))
{
    var mixer = new MixingSampleProvider(new[] { reader1, reader2 });
    WaveFileWriter.CreateWaveFile16(@"xxxx\xxx\mixed.wav", mixer);
}
and I get this exception while trying to create the MixingSampleProvider: System.ArgumentException: 'All mixer inputs must have the same WaveFormat'.
Am I using the right approach to record both audio streams? Also, it would be great if there were a way to record both into a single file, but I'm not sure whether that is possible.

The exception message 'All mixer inputs must have the same WaveFormat' hints that yours don't.
Change the line
waveSourceMic.WaveFormat = new WaveFormat(44100, 1);
to
waveSourceMic.WaveFormat = waveSourceSpeakers.WaveFormat;
Now you will be using the same format for both the mic and the speakers, and the mixer should be fine.
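If you cannot force both devices to capture in the same format, another option is to reconcile the two files at mix time instead. Here is a minimal sketch, assuming the loopback recording is stereo and the mic recording is mono (the paths are placeholders); WdlResamplingSampleProvider, MonoToStereoSampleProvider and MixingSampleProvider are all stock NAudio sample providers:
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

using (var speakers = new AudioFileReader(@"xxx\xxx\file1.wav"))
using (var mic = new AudioFileReader(@"xxx\xxx\file2.wav"))
{
    // Bring the mic recording up to the loopback recording's sample rate
    ISampleProvider micAdjusted =
        new WdlResamplingSampleProvider(mic, speakers.WaveFormat.SampleRate);

    // Match the channel count as well (mono mic -> stereo)
    if (micAdjusted.WaveFormat.Channels == 1 && speakers.WaveFormat.Channels == 2)
        micAdjusted = new MonoToStereoSampleProvider(micAdjusted);

    // Both inputs now share one WaveFormat, so the mixer accepts them
    var mixer = new MixingSampleProvider(new ISampleProvider[] { speakers, micAdjusted });
    WaveFileWriter.CreateWaveFile16(@"xxxx\xxx\mixed.wav", mixer);
}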

Related

How to record an input device with more than 2 channels to mp3 format

I am building recording software that records all devices connected to the PC into MP3 format.
Here is my code:
IWaveIn _captureInstance = inputDevice.DataFlow == DataFlow.Render ?
    new WasapiLoopbackCapture(inputDevice) : new WasapiCapture(inputDevice);
var waveFormatToUse = _captureInstance.WaveFormat;
var sampleRateToUse = waveFormatToUse.SampleRate;
var channelsToUse = waveFormatToUse.Channels;

if (sampleRateToUse > 48000) // LameMP3FileWriter doesn't support a rate of more than 48000Hz
{
    sampleRateToUse = 48000;
}
else if (sampleRateToUse < 8000) // LameMP3FileWriter doesn't support a rate of less than 8000Hz
{
    sampleRateToUse = 8000;
}
if (channelsToUse > 2) // LameMP3FileWriter doesn't support more than 2 channels
{
    channelsToUse = 2;
}

waveFormatToUse = WaveFormat.CreateCustomFormat(_captureInstance.WaveFormat.Encoding,
    sampleRateToUse,
    channelsToUse,
    _captureInstance.WaveFormat.AverageBytesPerSecond,
    _captureInstance.WaveFormat.BlockAlign,
    _captureInstance.WaveFormat.BitsPerSample);

_mp3FileWriter = new LameMP3FileWriter(_currentStream, waveFormatToUse, 32);
This code works properly except when a connected device (including virtual devices such as SteelSeries Sonar) has more than 2 channels.
With more than 2 channels, the recordings contain nothing but noise.
How can I solve this issue? I'm not tied to LameMP3FileWriter; I just need MP3 or any other format with good compression. Ideally the processing should happen entirely in memory, without intermediate files on disk, producing only the final audio file.
My recording code:
// When the capturer receives audio, write the buffer into the target file
_captureInstance.DataAvailable += (s, a) =>
{
    lock (_writerLock)
    {
        // Write the buffer into the writer instance's file
        _mp3FileWriter?.Write(a.Buffer, 0, a.BytesRecorded);
    }
};

// When the capturer stops, dispose of the capturer and writer instances
_captureInstance.RecordingStopped += (s, a) =>
{
    lock (_writerLock)
    {
        _mp3FileWriter?.Dispose();
    }
    _captureInstance?.Dispose();
};

// Start audio recording
_captureInstance.StartRecording();
If LAME doesn't support more than 2 channels, you can't use this encoder for your purpose. Have you tried the Fraunhofer surround MP3 encoder?
Link: https://download.cnet.com/mp3-surround-encoder/3000-2140_4-165541.html
Also, here's a nice article discussing how to convert between most audio formats (with C# code samples): https://www.codeproject.com/articles/501521/how-to-convert-between-most-audio-formats-in-net
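If stereo output is acceptable, another route is to keep LAME and cut the capture down to 2 channels before it reaches the writer. The following is a minimal sketch (not the full recorder) using NAudio's BufferedWaveProvider and MultiplexingWaveProvider; note that it simply keeps the first two channels and drops the rest rather than mixing them down, and the pull-buffer size here is an arbitrary choice:
// Queue the captured bytes, then pull them back out through a 2-channel view
var queue = new BufferedWaveProvider(_captureInstance.WaveFormat) { ReadFully = false };
var stereoView = new MultiplexingWaveProvider(new IWaveProvider[] { queue }, 2);
var pullBuffer = new byte[stereoView.WaveFormat.AverageBytesPerSecond];

// The writer now only ever sees a 2-channel format
_mp3FileWriter = new LameMP3FileWriter(_currentStream, stereoView.WaveFormat, 32);

_captureInstance.DataAvailable += (s, a) =>
{
    queue.AddSamples(a.Buffer, 0, a.BytesRecorded);
    int read;
    while ((read = stereoView.Read(pullBuffer, 0, pullBuffer.Length)) > 0)
    {
        lock (_writerLock)
        {
            _mp3FileWriter?.Write(pullBuffer, 0, read);
        }
    }
};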

MediaStreamSource video streaming in UWP

I just started to experiment with MediaStreamSource in UWP.
I took the MediaStreamSource streaming example from MS and tried to rewrite it to support mp4 instead of mp3.
I changed nothing but the InitializeMediaStreamSource part; it now looks like this:
{
    var clip = await MediaClip.CreateFromFileAsync(inputMP3File);
    var audioTrack = clip.EmbeddedAudioTracks.First();
    var property = clip.GetVideoEncodingProperties();

    // initialize parsing variables
    byteOffset = 0;
    timeOffset = new TimeSpan(0);

    var videoDescriptor = new VideoStreamDescriptor(property);
    var audioDescriptor = new AudioStreamDescriptor(audioTrack.GetAudioEncodingProperties());

    MSS = new MediaStreamSource(videoDescriptor)
    {
        Duration = clip.OriginalDuration
    };

    // hooking up the MediaStreamSource event handlers
    MSS.Starting += MSS_Starting;
    MSS.SampleRequested += MSS_SampleRequested;
    MSS.Closed += MSS_Closed;

    media.SetMediaStreamSource(MSS);
}
My problem is that I cannot find a single example where video streams are used instead of audio, so I can't figure out what's wrong with my code. If I set the MediaElement's Source property to the given mp4 file, it works like a charm. If I pick an mp3 and leave the videoDescriptor out, it also works. But if I try the same with a video (I'm still not sure whether I should add the audioDescriptor as a second argument to the MediaStreamSource, but since I've got one mixed stream, I guess it's not needed), nothing happens. The SampleRequested event is triggered. No error is thrown. It's really hard to debug; it's a real pain in the ass. :S
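For completeness, the variant I'm unsure about would look like this; MediaStreamSource does have a constructor that accepts two stream descriptors:
MSS = new MediaStreamSource(videoDescriptor, audioDescriptor)
{
    Duration = clip.OriginalDuration
};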
I have a solution for building a working video MediaStreamSource from file bitmaps, but unfortunately I have not found a solution for an RGBA buffer.
First of all, read the MediaStreamSource class documentation: https://learn.microsoft.com/en-us/uwp/api/windows.media.core.mediastreamsource
I'm creating an MJPEG MediaStreamSource:
var MediaStreamSource = new MediaStreamSource(
    new VideoStreamDescriptor(
        VideoEncodingProperties.CreateUncompressed(
            CodecSubtypes.VideoFormatMjpg, size.Width, size.Height
        )
    )
);
Then initialize some buffer time:
MediaStreamSource.BufferTime = TimeSpan.FromSeconds(1);
Then subscribe to the event to supply the requested frame:
MediaStreamSource.SampleRequested += async (MediaStreamSource sender, MediaStreamSourceSampleRequestedEventArgs args) =>
{
    var deferral = args.Request.GetDeferral();
    try
    {
        var timestamp = DateTime.Now - startedAt;
        var file = await Windows.ApplicationModel.Package.Current.InstalledLocation.GetFileAsync(@"Assets\grpPC1.jpg");
        using (var stream = await file.OpenReadAsync())
        {
            args.Request.Sample = await MediaStreamSample.CreateFromStreamAsync(
                stream.GetInputStreamAt(0), (uint)stream.Size, timestamp);
        }
        args.Request.Sample.Duration = TimeSpan.FromSeconds(5);
    }
    finally
    {
        deferral.Complete();
    }
};
As you can see, in my sample I use CodecSubtypes.VideoFormatMjpg and a hardcoded path to a jpeg file that I serve repeatedly as the MediaStreamSample. We still need to research which CodecSubtypes value to use for an RGBA (4 bytes per pixel) bitmap, like this:
var buffer = new Windows.Storage.Streams.Buffer((uint)(size.Width * size.Height * 4));
// latestBitmap is a SoftwareBitmap
latestBitmap.CopyToBuffer(buffer);
args.Request.Sample = MediaStreamSample.CreateFromBuffer(buffer, timestamp);
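For the RGBA case, one candidate worth trying (an assumption on my part, I have not verified it) is the CodecSubtypes.VideoFormatArgb32 subtype, used the same way as the MJPEG descriptor above:
var rgbaStreamSource = new MediaStreamSource(
    new VideoStreamDescriptor(
        VideoEncodingProperties.CreateUncompressed(
            CodecSubtypes.VideoFormatArgb32, size.Width, size.Height
        )
    )
);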

c# cscore how to instantly play loopback capture recorded bytes

Yo guys, it's me again with my noob questions. This time I've used CSCore to record Windows sounds and then send the recorded bytes to another PC over sockets, to be played there.
I just could not figure out how to play the bytes received in the DataAvailable callback...
I've tried writing the received bytes to a file and playing that file. That worked, but the sound doesn't play correctly; there are unexpected sounds mixed in with it.
So here's my code:
WasapiCapture capture = new WasapiLoopbackCapture();
capture.Initialize();
capture.DataAvailable += (s, e) =>
{
    WaveWriter w = new WaveWriter("file.mp3", capture.WaveFormat);
    w.Write(e.Data, e.Offset, e.ByteCount);
    w.Dispose();
    MemoryStream stream = new MemoryStream(File.ReadAllBytes("file.mp3"));
    SoundPlayer player = new SoundPlayer(stream);
    player.Play();
    stream.Dispose();
};
capture.Start();
capture.Start();
Any help would be highly appreciated ;-;.
If you want to hear how the sound comes out this way, I can record the result for you.
NOTE: if I just record the sounds to a file and open it later, it works perfectly, but if I write and play instantly, unexpected sounds are heard.
Use the SoundInSource as an adapter.
var capture = new WasapiCapture(...);
capture.Initialize(); // initialize always first!!!!

var soundInSource = new SoundInSource(capture)
{
    FillWithZeros = true // prevents WasapiOut from stopping when WasapiCapture does not serve any data
};

var soundOut = new WasapiOut();
soundOut.Initialize(soundInSource);
soundOut.Play();
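For the cross-PC case, the receiving side needs its own playback chain. A rough sketch, assuming CSCore's WriteableBufferingSource and assuming the sender transmitted capture.WaveFormat (sample rate, channels, bits per sample) ahead of the audio bytes; the socket plumbing itself is left out:
// Receiver side: build a writable source with the sender's format and play it
var bufferSource = new WriteableBufferingSource(receivedWaveFormat) { FillWithZeros = true };
var remoteOut = new WasapiOut();
remoteOut.Initialize(bufferSource);
remoteOut.Play();

// Then, for every packet of raw capture bytes read from the socket:
bufferSource.Write(packet, 0, packet.Length);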

Converting a WAV file to a byte array and back (stream audio)

I need to convert a WAV stream that I create inside my app into a byte array and then back.
I have no clue how to start.
This is my class where I create the sound:
private void forecast(string forecast)
{
    MemoryStream streamAudio = new MemoryStream();
    System.Media.SoundPlayer m_SoundPlayer = new System.Media.SoundPlayer();
    SpeechSynthesizer speech = new SpeechSynthesizer();

    speech.SetOutputToWaveStream(streamAudio);
    speech.Speak(forecast);
    streamAudio.Position = 0;
    m_SoundPlayer.Stream = streamAudio;
    m_SoundPlayer.Play();

    // Set the synthesizer output to null to release the stream.
    speech.SetOutputToNull();
}
After you've called Speak, the data is in the MemoryStream. You can copy it to a byte array and do whatever you like with it:
speech.Speak(forecast);
byte[] speechBytes = streamAudio.ToArray();
speechBytes contains the data you're looking for.
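Going back the other way is just the reverse: wrap the byte array in a new MemoryStream and hand it to a SoundPlayer. A minimal sketch:
byte[] speechBytes = streamAudio.ToArray();

// Later (or wherever the bytes ended up), rebuild a stream and play it
using (var ms = new MemoryStream(speechBytes))
{
    var player = new System.Media.SoundPlayer(ms);
    player.PlaySync(); // PlaySync blocks until playback finishes; Play() returns immediately
}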

change wav file (to 8kHz and 8bit) using NAudio

I want to convert a WAV file to 8kHz, 8-bit using NAudio.
WaveFormat format1 = new WaveFormat(8000, 8, 1);
byte[] waveByte = HelperClass.ReadFully(File.OpenRead(wavFile));

using (WaveFileWriter writer = new WaveFileWriter(outputFile, format1))
{
    writer.WriteData(waveByte, 0, waveByte.Length);
}
but when I play the output file, the sound is nothing but sizzle. Is my code correct, or what is wrong?
If I set the WaveFormat to WaveFormat(44100, 16, 1), it works fine.
Thanks.
A few pointers:
You need to use a WaveFormatConversionStream to actually convert from one sample rate / bit depth to another - you are just putting the original audio into the new file with the wrong wave format.
You may also need to convert in two steps - first changing the sample rate, then changing the bit depth / channel count. This is because the underlying ACM codecs can't always do the conversion you want in a single step.
You should use WaveFileReader to read your input file - you only want the actual audio data part of the file to get converted, but you are currently copying everything including the RIFF chunks as though they were audio data into the new file.
8 bit PCM audio usually sounds horrible. Use 16 bit, or if you must have 8 bit, use G.711 u-law or a-law
Downsampling audio can result in aliasing. To do it well you need to implement a low-pass filter first. This unfortunately isn't easy, but there are sites that help you generate the coefficients for a Chebyshev low pass filter for the specific downsampling you are doing.
Here's some example code showing how to convert from one format to another. Remember that you might need to do the conversion in multiple steps depending on the format of your input file:
using (var reader = new WaveFileReader("input.wav"))
{
    var newFormat = new WaveFormat(8000, 16, 1);
    using (var conversionStream = new WaveFormatConversionStream(newFormat, reader))
    {
        WaveFileWriter.CreateWaveFile("output.wav", conversionStream);
    }
}
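If that single-step conversion throws (the underlying ACM codec can't change the sample rate and the bit depth at once), the two-step approach mentioned above looks roughly like this; a sketch assuming a 16-bit mono input, and as noted, 8-bit PCM will still sound poor:
using (var reader = new WaveFileReader("input.wav"))
// Step 1: change the sample rate, keeping 16-bit samples
using (var rateStream = new WaveFormatConversionStream(new WaveFormat(8000, 16, 1), reader))
// Step 2: reduce the bit depth to 8 bits
using (var depthStream = new WaveFormatConversionStream(new WaveFormat(8000, 8, 1), rateStream))
{
    WaveFileWriter.CreateWaveFile("output.wav", depthStream);
}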
The following code solved my problem dealing with G.711 mu-law files that have a .vox extension, converting them to WAV. Without the RawSourceWaveStream I kept getting a "No RIFF Header" error from WaveFileReader.
FileStream fileStream = new FileStream(fileName, FileMode.Open);
var waveFormat = WaveFormat.CreateMuLawFormat(8000, 1);
var reader = new RawSourceWaveStream(fileStream, waveFormat);

using (WaveStream convertedStream = WaveFormatConversionStream.CreatePcmStream(reader))
{
    WaveFileWriter.CreateWaveFile(fileName.Replace("vox", "wav"), convertedStream);
}
fileStream.Close();
OpenFileDialog openFileDialog = new OpenFileDialog();
openFileDialog.Filter = "Wave Files (*.wav)|*.wav|All Files (*.*)|*.*";
openFileDialog.FilterIndex = 1;

WaveFileReader reader = new NAudio.Wave.WaveFileReader(dpmFileDestPath);
WaveFormat newFormat = new WaveFormat(8000, 16, 1);
WaveFormatConversionStream str = new WaveFormatConversionStream(newFormat, reader);

try
{
    WaveFileWriter.CreateWaveFile("C:\\Konvertierten_Dateien.wav", str);
}
catch (Exception ex)
{
    MessageBox.Show(String.Format("{0}", ex.Message));
}
finally
{
    str.Close();
}

MessageBox.Show("Konvertieren ist Fertig!"); // German: "Conversion is finished!"
