I am making a game in which I need to stream SOME of the audio to other players. Right now I am using OnAudioFilterRead() on the AudioListener to get ALL audio as a buffer, and I have managed to stream this using Photon Voice.
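For reference, the capture script looks roughly like this (a minimal sketch; the class name is mine and the hand-off to Photon Voice is omitted):

using UnityEngine;

// Sits on the same GameObject as the AudioListener. OnAudioFilterRead
// runs on the audio thread and receives the final mix as interleaved
// float samples.
public class ListenerCapture : MonoBehaviour
{
    void OnAudioFilterRead(float[] data, int channels)
    {
        // Copy 'data' into a queue here and hand it to the streaming
        // layer from the main thread (avoid Unity API calls and
        // allocations on the audio thread).
    }
}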
However, I only want to stream certain audio sources (about 18) which make up the music part of my music game. There will also be voice chat (which is streamed separately) and maybe other audio sources that I do not want to stream.
I could assign these audio clips to a mixer track, but I have not found a way to get the audio from a mixer track as a buffer. Does anyone have a solution?
Thanks!
I am trying to record video with audio and save it as an uncompressed AVI file. The graph is as you can see in the picture. The problem is that the audio recording is ~500 ms behind the video, no matter which capture sources I use. What can I do to keep video and audio in sync?
The default audio capture buffer is pretty large, about 500 ms in length. You start getting the data only once a buffer is filled, hence the lag. Large buffers might be okay for some scenarios and bad for others. You can use the IAMBufferNegotiation interface to adjust the buffering.
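For illustration, with the DirectShow.NET (DirectShowLib) bindings the suggestion looks roughly like this; capturePin and avgBytesPerSec are placeholders for the audio capture filter's output pin and the capture format's byte rate:

using DirectShowLib;

static void SuggestSmallAudioBuffers(IPin capturePin, int avgBytesPerSec)
{
    // Must be called BEFORE the pin is connected into the graph.
    var negotiation = (IAMBufferNegotiation)capturePin;

    var props = new AllocatorProperties
    {
        cbBuffer = avgBytesPerSec / 20, // ~50 ms per buffer instead of ~500 ms
        cBuffers = -1,                  // -1 means "leave as is"
        cbAlign  = -1,
        cbPrefix = -1
    };

    int hr = negotiation.SuggestAllocatorProperties(props);
    DsError.ThrowExceptionForHR(hr);
}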
See related (you will see 500 ms lag is a typical complaint):
Audio Sync problems using DirectShow.NET
Minimizing audio capture latency in DirectShow
Delay in video in DirectShow graph
I have a video file and I want to play it on one computer (preferably with C#), but stream the audio to an Android device and have it play there synchronously with the video content over the network.
Do you have any tips on how I can achieve that?
Any library or code examples are welcome :)
I am working on a project where I am playing a wave file using NAudio to someone over the phone through a softphone. The person making the call wears a USB headset (which is an external sound card) and needs to be able to speak along with the wave files. Right now I'm running a dual 3.5mm audio cable from the output into the input on the back of the computer to make this happen. This forces me to use the onboard sound card for the wave file and the headset for the person to speak, which means I have to switch the default input audio device on demand so that either the person or the wave file can be heard. This causes issues with the softphone app depending on how its devices are set. I want to cut out the onboard sound card altogether and send both my wave audio and the person speaking into the same input device.
When I play the audio this is the code I call:
WaveStream waveStream = new WaveFileReader(@"C:\Users\Public\Music\tester.wav");
WaveOut waveOut = new WaveOut();
waveOut.DeviceNumber = int.Parse(device1); // device1 is the selected output device index
waveOut.Init(waveStream);
waveOut.Play();
At this point I would love to not only send to the selected output device but also to an input device as well. Is there any simple way I can do this? Thanks for your help in advance.
Unfortunately, the various Windows audio APIs provide no way for you to replace the audio received by an actual input device with your own audio, so this isn't something you can achieve out of the box with NAudio. What you need is to create a virtual audio input device. This is quite a lot of work, so the simplest solution is to purchase something like Virtual Audio Cable, which will allow you to create virtual sound devices and patch them together.
I have been searching all over for a way to display the audio intensity of an mp4 file. I have found many guides on how to do it with wav files and even audio being actively recorded, but I can't find anything about mp4s.
I have a C# Windows Forms app that plays a video and allows you to caption it. What I am trying to do next is add a visual representation of the audio intensity so the user can see where the next chunk of speech is. To play the video I am using Windows Media Player.
You can refer to this other Stack Overflow question:
How to play mp4 songs using NAudio
Instead of playing the file you can show the current level with some widget. Just pay attention to the fact that you should probably display the level in dB:
dB = 20 * log10(amplitude/maxAmplitude)
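A minimal sketch of that idea with NAudio, assuming the mp4's audio can be read through MediaFoundationReader (Windows Vista and later); the file path and the ~100 ms block size are just examples:

using System;
using NAudio.Wave;

class LevelScan
{
    static void Main()
    {
        // Reads the audio track of an mp4 via Media Foundation and prints
        // a peak level in dB for every ~100 ms block of samples.
        using (var reader = new MediaFoundationReader(@"C:\videos\example.mp4"))
        {
            var samples = reader.ToSampleProvider(); // 32-bit floats in [-1, 1]
            var format = samples.WaveFormat;
            var buffer = new float[format.SampleRate * format.Channels / 10];

            int read;
            while ((read = samples.Read(buffer, 0, buffer.Length)) > 0)
            {
                float max = 0f;
                for (int i = 0; i < read; i++)
                    max = Math.Max(max, Math.Abs(buffer[i]));

                // maxAmplitude is 1.0 for normalized float samples
                double dB = 20 * Math.Log10(Math.Max(max, 1e-10));
                Console.WriteLine("{0:F1} dB", dB);
            }
        }
    }
}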
Very similar to this question, I have a networked microcontroller which collects PCM audio (8-bit, 8 kHz) and streams the data as raw bytes over a TCP network socket. I am able to connect to the device, open a NetworkStream, and create a RIFF/wave file out of the collected data.
I would like to take it a step further and enable live playback of the measurements. My approach so far has been to buffer the incoming data into multiple MemoryStreams with an appropriate RIFF header and, when each chunk is complete, to use System.Media.SoundPlayer to play that wave-file segment. To avoid high latency, each segment is only 0.5 seconds long.
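For reference, each segment ends up wrapped roughly like this before being handed to SoundPlayer (a sketch; at 8-bit / 8 kHz mono, a 0.5 s chunk is 4000 bytes):

using System;
using System.IO;
using System.Media;

static void PlaySegment(byte[] pcm)
{
    // Wrap one chunk of 8-bit, 8 kHz mono PCM in a minimal RIFF header.
    var ms = new MemoryStream();
    using (var w = new BinaryWriter(ms, System.Text.Encoding.ASCII, leaveOpen: true))
    {
        w.Write("RIFF".ToCharArray());
        w.Write(36 + pcm.Length);           // RIFF chunk size
        w.Write("WAVEfmt ".ToCharArray());
        w.Write(16);                        // fmt chunk size
        w.Write((short)1);                  // PCM
        w.Write((short)1);                  // mono
        w.Write(8000);                      // sample rate
        w.Write(8000);                      // byte rate (8-bit mono)
        w.Write((short)1);                  // block align
        w.Write((short)8);                  // bits per sample
        w.Write("data".ToCharArray());
        w.Write(pcm.Length);
        w.Write(pcm);
    }
    ms.Position = 0;
    new SoundPlayer(ms).PlaySync();         // blocks until the segment ends
}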
My major issue with this approach is that often there is a distinctive popping sound between segments (since each chunk is not necessarily zero-centered or zero-ended).
Questions:
Is there a more suitable or direct method to playback live streaming PCM audio in C#?
If not, are there additional steps I can take to make the multiple playbacks run more smoothly?
I don't think you can manage it without popping sounds using the SoundPlayer, because there must be no delay between pushing buffers. Normally you should always keep one extra buffer queued, but the SoundPlayer only buffers a single one. Even when the SoundPlayer raises an event that it is ready, you are already too late to start a new sound.
I advise you to check this link: Recording and Playing Sound with the Waveform Audio Interface http://msdn.microsoft.com/en-us/library/aa446573.aspx
There are some SoundPlayer examples (skip those), but also examples of how to use WaveOut. Look at the section "Playing with WaveOut".
The SoundPlayer is normally used for notification sounds.
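As a concrete alternative to the raw waveOut API, NAudio (already mentioned above) wraps it and ships a BufferedWaveProvider that keeps several buffers in flight; a minimal sketch matching the question's format (8 kHz, 8-bit mono):

using System;
using NAudio.Wave;

// Queue raw PCM from the network into a BufferedWaveProvider; WaveOut
// pulls from it continuously, so there are no per-segment gaps or pops.
var format = new WaveFormat(8000, 8, 1); // rate, bits, channels
var provider = new BufferedWaveProvider(format)
{
    BufferDuration = TimeSpan.FromSeconds(5),
    DiscardOnBufferOverflow = true
};

var waveOut = new WaveOut();
waveOut.Init(provider);
waveOut.Play();

// For each read from the NetworkStream:
//   provider.AddSamples(networkBuffer, 0, bytesRead);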