I'd like to be able to mix the microphone output with an MP3 file and output the result to a specific device.
I got playing the MP3 file to a specific device working:
Mp3FileReader reader = new Mp3FileReader("C:\\Users\\Victor\\Music\\Musik\\Attack.mp3");
var waveOut = new WaveOut();// or WaveOutEvent()
waveOut.DeviceNumber = deviceId; //deviceId, like 0 or 1
waveOut.Init(reader);
waveOut.Play();
So what I would like to do is basically send the microphone output to a specific output device at all times, and overlay the sound of an MP3 file onto that same device when, for example, a button is pressed.
Is what I'm trying to do possible with NAudio, and if so, how would I go about it?
Thanks!
The basic strategy is to put the audio received from the microphone into a BufferedWaveProvider. Then turn that into an ISampleProvider with the ToSampleProvider extension method. Now you can pass that into a MixingSampleProvider and play from the MixingSampleProvider. At any point you can mix in other audio by adding another input to the MixingSampleProvider.
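A minimal sketch of that strategy, assuming the NAudio package (the device numbers, capture format, and file path are placeholders; note that MixingSampleProvider requires all inputs to share the mixer's sample rate and channel count, so a resampler may be needed in practice):

```csharp
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Capture the microphone into a BufferedWaveProvider.
var micIn = new WaveInEvent { DeviceNumber = 0 };          // microphone device
micIn.WaveFormat = new WaveFormat(44100, 16, 2);
var micBuffer = new BufferedWaveProvider(micIn.WaveFormat);
micIn.DataAvailable += (s, e) => micBuffer.AddSamples(e.Buffer, 0, e.BytesRecorded);

// Mixer that keeps playing even when an input runs dry.
var mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 2))
{
    ReadFully = true
};
mixer.AddMixerInput(micBuffer.ToSampleProvider());

// Play the mix on the chosen output device.
var waveOut = new WaveOutEvent { DeviceNumber = 1 };       // target output device
waveOut.Init(mixer);
micIn.StartRecording();
waveOut.Play();

// Later, e.g. in a button handler, overlay the MP3:
var mp3 = new Mp3FileReader(@"C:\Users\Victor\Music\Musik\Attack.mp3");
mixer.AddMixerInput(mp3.ToSampleProvider());
```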
Related
I am trying to record two audio inputs (webcam + microphone) and one video input (webcam) via MediaCapture from C#. Just one audio and one video input works like a charm, but the class itself does not allow specifying two audio input device IDs.
An example:
var captureDeviceSettings = new MediaCaptureInitializationSettings
{
    VideoDeviceId = videoDeviceId, // Webcam video input
    AudioDeviceId = audioDeviceId, // Webcam audio input
    StreamingCaptureMode = StreamingCaptureMode.AudioAndVideo,
};
I thought about using an AudioGraph with a submix node, but MediaCapture needs a device ID and a submix node does not provide one. Using the audio graph's output device also does not seem like the solution; I do not want to play the microphone to the default output device. I tried that, and it sounds horrible. I also thought about creating a virtual audio device, but I don't know how.
Any suggestions on that?
MediaCapture won't be able to do this. WASAPI may be able to, but it is not trivial.
Your best option may be loopback recording: https://learn.microsoft.com/en-us/windows/win32/coreaudio/loopback-recording.
You will still have to mux in the video stream, though, if you get loopback recording to work.
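If managed code is acceptable, NAudio wraps the relevant WASAPI pieces; a sketch of opening two capture endpoints at once with WasapiCapture (the device indices and buffering are placeholders, and the captured streams still have to be muxed with the video yourself):

```csharp
using NAudio.CoreAudioApi;
using NAudio.Wave;

// Enumerate the active capture endpoints and open one WasapiCapture per device.
var devices = new MMDeviceEnumerator()
    .EnumerateAudioEndPoints(DataFlow.Capture, DeviceState.Active);

var webcamMic = new WasapiCapture(devices[0]);   // e.g. webcam microphone
var headsetMic = new WasapiCapture(devices[1]);  // e.g. second microphone

webcamMic.DataAvailable += (s, e) => { /* buffer/mux this stream */ };
headsetMic.DataAvailable += (s, e) => { /* buffer/mux this stream */ };

webcamMic.StartRecording();
headsetMic.StartRecording();
```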
I am using a MediaPlaybackList to essentially 'stream' audio data coming in via Bluetooth as a byte[] on a fixed gather interval. According to the MS documentation, MediaPlaybackList provides 'gapless' playback between items. But in my case, I get a popping sound and a gap when transitioning to the next audio sample.
byte[] audioContent = new byte[audioLength];
chatReader.ReadBytes(audioContent);
MediaPlaybackItem mediaPlaybackItem = new MediaPlaybackItem(MediaSource.CreateFromStream(new MemoryStream(audioContent).AsRandomAccessStream(), "audio/mpeg"));
playbackList.Items.Add(mediaPlaybackItem);
if (_mediaPlayerElement.MediaPlayer.PlaybackSession.PlaybackState != MediaPlaybackState.Playing)
{
_mediaPlayerElement.MediaPlayer.Play();
}
How can I achieve truly gapless streaming audio using a method similar to this?
Also, I have tried writing my stream to a file in real time as the data comes in, just to see whether the popping sound or the gap is there. The file that the bytes are appended to plays back perfectly, with no pop or gap:
using (var stream = await playbackFile.OpenStreamForWriteAsync())
{
stream.Seek(0, SeekOrigin.End);
await stream.WriteAsync(audioContent, 0, audioContent.Length);
}
The MediaPlayer, and in particular the MediaPlaybackList, is not designed to be used with a "live" audio stream. You must finish writing the data to the byte stream before adding it to the list and starting the MediaPlayer, so the MediaPlayer is not the correct solution for this particular scenario.
A better solution would be to use an AudioGraph. An AudioGraph allows you to add input sources from actual audio endpoints, so you don't need to fill a byte buffer with the streaming audio. You can then use submix nodes to mix between the audio endpoint streams with no clicks or pops.
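A rough sketch of that approach in a UWP/WinRT context (device selection, format settings, and error handling are omitted, and the code must run in an async method):

```csharp
using Windows.Media.Audio;
using Windows.Media.Capture;
using Windows.Media.Render;

// Build a graph: device input -> submix -> device output.
var settings = new AudioGraphSettings(AudioRenderCategory.Media);
var graphResult = await AudioGraph.CreateAsync(settings);
var graph = graphResult.Graph;

var inputResult = await graph.CreateDeviceInputNodeAsync(MediaCategory.Other);
var outputResult = await graph.CreateDeviceOutputNodeAsync();
var submix = graph.CreateSubmixNode();

inputResult.DeviceInputNode.AddOutgoingConnection(submix);
submix.AddOutgoingConnection(outputResult.DeviceOutputNode);

graph.Start();
```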
I'm trying to get an image from a USB device using AForge (DirectShow). The device (USB3HDCAP) has 3 different inputs (HDMI, DVI, and S-Video). Using the code below, I can access and get the default input image (only from HDMI). However, when I change the physical input on the device (from HDMI to DVI, for example) the image is black. What can I do to get video from another input (DVI or S-Video)?
LocalWebCamsCollection = new FilterInfoCollection(FilterCategory.VideoInputDevice);
LocalWebCam = new VideoCaptureDevice(LocalWebCamsCollection[0].MonikerString);
LocalWebCam.NewFrame += new NewFrameEventHandler(Cam_NewFrame);
LocalWebCam.Start();
Your code snippet just captures video. To switch inputs on video capture hardware you need to use a crossbar to re-configure the device.
In plain DirectShow it looks like this:
Crossbar filter change current input to Composite
DirectShow USB webcam changing video source
With AForge.NET you should be looking up for a similar method, e.g. see:
Start Capturing From S-Video
... VideoCaptureDevice.AvailableCrossbarVideoInputs gives an array of the available video inputs, and VideoCaptureDevice.CrossbarVideoInput accepts, yes, a video input. So combine the two:
VideoKaynagi.CrossbarVideoInput = VideoKaynagi.AvailableCrossbarVideoInputs[0];
Of course, you need to replace 0 with the index of the S-Video input.
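A sketch of selecting the S-Video input by enumerating the crossbar inputs instead of hard-coding an index (the exact PhysicalConnectorType member name for S-Video may differ between AForge versions, so treat it as an assumption):

```csharp
using AForge.Video.DirectShow;

// Open the first capture device and pick its S-Video crossbar input, if any.
var videoSource = new VideoCaptureDevice(
    new FilterInfoCollection(FilterCategory.VideoInputDevice)[0].MonikerString);

foreach (var input in videoSource.AvailableCrossbarVideoInputs)
{
    if (input.Type == PhysicalConnectorType.VideoSVideo)
    {
        videoSource.CrossbarVideoInput = input;
        break;
    }
}
videoSource.Start();
```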
I'm recording sound via WasapiLoopbackCapture and writing it to an MP3 file via the NAudio.Lame lib:
LAMEPreset quality = LAMEPreset.ABR_320;
audiostream = new WasapiLoopbackCapture();
audiostream.DataAvailable += stream_DataAvailable;
audiostream.RecordingStopped += stream_RecordingStopped;
mp3writer = new LameMP3FileWriter(Environment.GetEnvironmentVariable("USERPROFILE") + @"\Music\record_temp.mp3",
    audiostream.WaveFormat, quality);
audiostream.StartRecording();
When the user presses the stop-recording-button, I save the MP3 and stop the recording:
mp3writer.Flush();
audiostream.Dispose();
mp3writer.Dispose();
All works fine, except that the output file has some disturbing crackle noises in it (see here for an example). I think my computer may be a bit too slow to compress and write the audio data in real time, so some of the values get lost, but that is just my guess.
Edit: when recording to WAV, the errors don't appear.
What may be the problem here and how could I possibly solve it / work around it?
Start off by saving your audio to a WAV file. Does that have crackles in it? If so, the crackles are coming from the sound card. If not, they are coming from the MP3 encoding code.
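A minimal sketch of that diagnostic, swapping the LameMP3FileWriter for NAudio's WaveFileWriter while keeping the rest of the capture code the same (the output path is a placeholder):

```csharp
using System;
using NAudio.Wave;

// Record the loopback capture straight to WAV to rule out the MP3 encoder.
var audiostream = new WasapiLoopbackCapture();
var wavWriter = new WaveFileWriter(
    Environment.GetEnvironmentVariable("USERPROFILE") + @"\Music\record_temp.wav",
    audiostream.WaveFormat);

audiostream.DataAvailable += (s, e) => wavWriter.Write(e.Buffer, 0, e.BytesRecorded);
audiostream.RecordingStopped += (s, e) =>
{
    wavWriter.Dispose();
    audiostream.Dispose();
};
audiostream.StartRecording();
```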
I'm recording in NAudio with a PS3Eye camera, using CLEye drivers.
The camera has a 4 microphone array, and presents 4 channels of audio to the system.
By default, all of the channels are being recorded by NAudio. I'm recording to PCM wave, and getting a 4-channel WAV output file.
When I try to play the file in NAudio, I receive an MmException 'NoDriver' calling acmFormatSuggest. Stereo files play fine.
My sound card can only output 2 channels, which appears to cause the error. Setting my Windows audio settings to Quadraphonic does not resolve this issue.
Perhaps I can ask NAudio to record only 2 channels, or implement my own WaveStream somewhere?
Does anybody have any ideas for down-sampling the number of channels in NAudio? (preferably at record time). Big thanks!
Isn't it as simple as declaring it in your WaveFormat? Here 12000 is the sample rate and 1 is the number of channels.
waveInStream = new WaveIn(WaveCallbackInfo.FunctionCallback());
waveInStream.WaveFormat = new WaveFormat(12000, 1);
waveInStream.DeviceNumber = 2;
AudioWriter = new WaveFileWriter("c:\\MyAudio.wav", waveInStream.WaveFormat);
waveInStream.DataAvailable += waveInStream_DataAvailable;
waveInStream.StartRecording();
That's how I set up my code for a basic webcam recorder, and with stereo mics it outputs mono WAV files.