Very similar to this question, I have a networked micro-controller which collects PCM audio (8-bit, 8 kHz) and streams the data as raw bytes over a TCP network socket. I am able to connect to the device, open a NetworkStream, and create a RIFF/WAVE file out of the collected data.
I would like to take it a step further and enable live playback of the measurements. My approach so far has been to buffer the incoming data into multiple MemoryStreams, each with an appropriate RIFF header, and when each chunk is complete, to use System.Media.SoundPlayer to play that wave-file segment. To avoid high latency, each segment is only 0.5 seconds long.
My major issue with this approach is that often there is a distinctive popping sound between segments (since each chunk is not necessarily zero-centered or zero-ended).
Questions:
Is there a more suitable or direct method to playback live streaming PCM audio in C#?
If not, are there additional steps I can take to make the multiple playbacks run more smoothly?
I don't think you can avoid the popping sounds with the SoundPlayer, because gapless playback requires the next buffer to be pushed without any delay. Normally you should always have one extra buffer queued, but the SoundPlayer only buffers a single buffer. Even when the SoundPlayer raises an event that it is ready, you're already too late to play the next sound.
I advise you to check this link: Recording and Playing Sound with the Waveform Audio Interface http://msdn.microsoft.com/en-us/library/aa446573.aspx
It contains some examples of the SoundPlayer (skip those), but it also shows how to use WaveOut; look at the section "Playing with WaveOut".
The SoundPlayer is normally used for notification sounds.
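Not from this answer, but for comparison: with the NAudio library you can skip the per-segment WAV files entirely by feeding the raw PCM into a BufferedWaveProvider and letting a single output device play continuously. A minimal sketch, assuming stream is the NetworkStream from the question:

using System;
using NAudio.Wave;

// 8 kHz, 8-bit, mono PCM to match the micro-controller's output
var format = new WaveFormat(8000, 8, 1);
var buffer = new BufferedWaveProvider(format)
{
    BufferDuration = TimeSpan.FromSeconds(5),
    DiscardOnBufferOverflow = true
};

var waveOut = new WaveOutEvent();
waveOut.Init(buffer);
waveOut.Play(); // plays silence until samples arrive, then plays gaplessly

var bytes = new byte[1024];
int read;
while ((read = stream.Read(bytes, 0, bytes.Length)) > 0)
{
    // feed raw PCM straight from the socket; no RIFF header needed
    buffer.AddSamples(bytes, 0, read);
}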
Related
I am trying to capture video with audio and save it as an uncompressed AVI file. The graph is as shown in the picture. The problem is that the audio recording is ~500 ms behind the video, regardless of which sources I use. What can I do to keep video and audio in sync?
The default audio capture buffer is pretty large, about 500 ms in length. You start getting the data only once the buffer is filled, hence the lag. Large buffers might be fine for some scenarios but not for others. You can use the IAMBufferNegotiation interface to adjust the buffering.
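A rough sketch of that adjustment using the DirectShowLib wrappers; capturePin and the exact buffer sizing are assumptions, and the suggestion must be made before the capture pin is connected:

using DirectShowLib;

// capturePin: the audio capture filter's output pin, obtained while
// building the graph and not yet connected (assumption)
var negotiation = (IAMBufferNegotiation)capturePin;

// e.g. 44.1 kHz, 16-bit stereo = 176400 bytes/s, so ~50 ms is 8820 bytes
var props = new AllocatorProperties
{
    cBuffers = 4,     // several small buffers instead of one big one
    cbBuffer = 8820,  // ~50 ms of audio per buffer
    cbAlign = -1,     // -1 = no preference
    cbPrefix = -1
};

int hr = negotiation.SuggestAllocatorProperties(props);
DsError.ThrowExceptionForHR(hr);

// connect the pins and run the graph afterwards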
See related (you will see 500 ms lag is a typical complaint):
Audio Sync problems using DirectShow.NET
Minimizing audio capture latency in DirectShow
Delay in video in DirectShow graph
Hi all, I have an application which simultaneously receives WAV data on multiple threads from different UDP ports:
Is it possible to play all the received WAV data at the same time, simultaneously, using the WaveOut API?
Is it possible to play all the received WAV data at the same time using NAudio? Are NAudio objects thread-safe?
By "play simultaneously" I mean the case where a file is playing in a media player and something on YouTube is playing at the same time, and you can hear both sounds from your speakers at once.
Any help or hint would be appreciated. Thanks in advance.
Yes, you can do it with multiple instances of WaveOut (one for each stream), or you can do it with a single WaveOut and mix it yourself (e.g. with the MixingSampleProvider).
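A minimal sketch of the single-WaveOut approach, assuming two BufferedWaveProviders fed from the UDP receive threads (names and the incoming format are illustrative):

using NAudio.Wave;
using NAudio.Wave.SampleProviders;

var format = new WaveFormat(44100, 16, 2); // format of the incoming PCM (assumption)

var bufferA = new BufferedWaveProvider(format); // fed from UDP port A
var bufferB = new BufferedWaveProvider(format); // fed from UDP port B

// the mixer works on IEEE float samples with matching rate/channel count
var mixer = new MixingSampleProvider(
    WaveFormat.CreateIeeeFloatWaveFormat(44100, 2))
{
    ReadFully = true // keep outputting (silence) even when inputs run dry
};
mixer.AddMixerInput(bufferA.ToSampleProvider());
mixer.AddMixerInput(bufferB.ToSampleProvider());

var waveOut = new WaveOutEvent();
waveOut.Init(mixer);
waveOut.Play();

// on each UDP receive (any thread): bufferX.AddSamples(packet, 0, length);
// BufferedWaveProvider.AddSamples is safe to call from a different thread
// than the one doing playback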
I think that you can do that.
This tutorial can help you: http://msdn.microsoft.com/en-us/magazine/cc163455.aspx
It explains how to integrate video, but I think the library it uses can play audio as well.
I want to synchronize the playback of a song to a timer so that I can keep the song's beats in sync with things rendered on the screen. Is there any way of accomplishing this using NAudio?
Several of the output devices in NAudio support the IWavePosition interface, which gives a more accurate indication of where the soundcard currently is in the buffer it is playing. Usually this is reported as the number of bytes played since playback started, so it does not necessarily correspond to the position within the file or song you are playing. If you use this, you will need to keep track of when you started playing.
Usually you would keep the things rendered on screen synchronized to the audio playback position, rather than the other way round.
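A small sketch of reading the position, assuming a WaveOutEvent (which implements IWavePosition); the conversion to seconds uses the device's output format:

using NAudio.Wave;

var waveOut = new WaveOutEvent();
waveOut.Init(reader); // reader: whatever IWaveProvider you are playing
waveOut.Play();
// record your own start reference here if you need position-in-song

// later, e.g. once per rendered frame:
var position = (IWavePosition)waveOut;
long bytesPlayed = position.GetPosition();
double secondsSincePlay = (double)bytesPlayed /
    position.OutputWaveFormat.AverageBytesPerSecond;
// secondsSincePlay counts from Play(), not from the start of the file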
I am making a VOIP program for fun, and I got it mostly working. Since my last question, another issue has come up. When there are two or more voices being played through the client using a MixingWaveProvider, there are strange stutters, clicks, snaps, and static in the final mixed audio. Most of the time, it sounds like a portion of someone's voice plays, pauses, and lets another person's voice play for a short while. This continues for as long as both are talking (Each voice seems to "take turns" outputting to the waveMixer).
I won't bother posting the Speex encoding/decoding code, as this issue happens with or without it. I get the input through a WaveInEvent, which feeds its data into a UDP network stream. The UDP stream sends the sound data to the other clients.
Here is the code that I use to initialize the WaveOut and MixingWaveProvider32:
waveOut = new DirectSoundOut(settings.GetOutputDevice(), 50); // 50 ms desired latency
waveMixer = new MixingWaveProvider32(); // mixes 32-bit float inputs
waveOut.Init(waveMixer);
waveOut.Play(); // plays whatever the mixer produces from here on
When a client connects, I input the received packet data into the user's BufferedWaveProvider:
provider = new BufferedWaveProvider(format) { DiscardOnBufferOverflow = true }; // drop data instead of throwing when full
wave16ToFloat = new Wave16ToFloatProvider(provider); // convert 16-bit PCM to 32-bit float for the mixer
After that, I use this code to add the above 32-bit provider to the MixingWaveProvider32:
waveMixer.AddInputStream(wave16ToFloat);
It seems that the issue is less severe for streams added before the MixingWaveProvider32 is passed to WaveOut; I assume that is related to why this happens. However, I really need to be able to add them dynamically.
This may have something to do with my network implementation, so I will look into that if nothing else turns up here. Could each voice data packet be blocking the next one from being read, causing the back-and-forth kind of sound? If so, how could I buffer the data on the server for longer, or wait and send larger chunks from the client?
Edit:
I am almost sure this is caused by the BufferedWaveProviders draining completely several times a second. The packets are not filling them fast enough, and they run dry with nothing left to play. As I asked above, is there any way I can send the data from the client in larger chunks? Or can I make the buffers drain more slowly somehow?
Edit 2:
I have now implemented an auto-pausing buffer that makes sure it stays filled. The buffer unpauses when its internal buffer goes above 1 second of sound and pauses when the data drops below 0.5 seconds. The buffer hovers around 1 second of sound, and I have verified that it is not running out or pausing the sound mid-stream. Though this should be a good thing, the sound distortion still exists and is just as bad as before. It seems something is wrong with the mixer or my setup.
Sounds like you have already diagnosed the problem. If the BufferedWaveProviders aren't filling up, then you will get silence. You need to implement some kind of auto-pause that delays playback until there is enough buffered audio. A cheating way to do this is to start off each buffer with five seconds of silence, hopefully allowing another five seconds of audio to be received while that silence plays out.
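One way to sketch that auto-pause: a wrapper that outputs silence until the BufferedWaveProvider has built up enough audio (the class name and thresholds are illustrative, not from the post):

using System;
using NAudio.Wave;

class AutoPauseWaveProvider : IWaveProvider
{
    private readonly BufferedWaveProvider source;
    private readonly int resumeThresholdBytes;
    private bool playing;

    public AutoPauseWaveProvider(BufferedWaveProvider source, double resumeSeconds = 1.0)
    {
        this.source = source;
        resumeThresholdBytes =
            (int)(source.WaveFormat.AverageBytesPerSecond * resumeSeconds);
    }

    public WaveFormat WaveFormat => source.WaveFormat;

    public int Read(byte[] buffer, int offset, int count)
    {
        if (!playing && source.BufferedBytes >= resumeThresholdBytes)
            playing = true;  // enough queued: start draining
        else if (playing && source.BufferedBytes == 0)
            playing = false; // ran dry: fall back to silence until refilled

        if (!playing)
        {
            Array.Clear(buffer, offset, count); // emit silence while buffering
            return count;
        }
        return source.Read(buffer, offset, count); // pads short reads with silence
    }
}

// usage, wrapping each user's buffer before the float conversion:
// wave16ToFloat = new Wave16ToFloatProvider(new AutoPauseWaveProvider(provider));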
How do I play 3 different music tracks at the same time on my computer, such that song1 is played in speaker1, song2 in speaker2...
Is this possible programmatically? What additional hardware will I need? Do I need 3 separate sound cards? Given that the hardware is in place, how would I "route" the sound output for a particular song to a particular speaker?
Alternatively, is there a special hardware that can handle multiple inputs and outputs?
Appreciate your expert opinions.
Try an interface like this one with the NAudio C# library: http://www.esi-audio.com/products/maya44usb+/ Look at AsioOut and the MultiplexingWaveProvider.
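A sketch of the channel routing with MultiplexingWaveProvider, assuming three mono sources with matching sample rates and an ASIO device with at least three outputs (file names are placeholders):

using NAudio.Wave;

var song1 = new AudioFileReader("song1.wav");
var song2 = new AudioFileReader("song2.wav");
var song3 = new AudioFileReader("song3.wav");

// combine the inputs into one stream with three output channels;
// input channels are numbered globally across all inputs, so with
// mono sources input n corresponds to song n+1 (stereo sources
// would occupy two input channels each)
var multiplexer = new MultiplexingWaveProvider(
    new IWaveProvider[] { song1, song2, song3 }, 3);
multiplexer.ConnectInputToOutput(0, 0); // song1 -> output/speaker 1
multiplexer.ConnectInputToOutput(1, 1); // song2 -> output/speaker 2
multiplexer.ConnectInputToOutput(2, 2); // song3 -> output/speaker 3

var asioOut = new AsioOut(); // requires an ASIO driver, e.g. for the MAYA44
asioOut.Init(multiplexer);
asioOut.Play();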
Provided that you have as many outputs (total) as you have songs, you are all set (I'm assuming you'll just be playing each song in mono). The simplest way to tackle this problem is to open one "stream" for each song and play the song through that stream. You'll have to do some work to open each stream with the right number of channels and ensure that the song is played on the correct channel.
There are two potential problems with that technique:
1. Some audio API/hardware combinations don't allow multiple streams to access the same device. This is most commonly an issue with ASIO on Windows, but it may be an issue in other cases; I am not a Windows expert.
2. It is a bit tricky to ensure that all streams are exactly synchronized. If you require tight synchronization, you should use a single stream and a single hardware device.
If the above issues are a concern, then you should get some audio hardware with at least three outputs, and open one stream with access to three channels.
You can use PortAudio for audio I/O and libsndfile for reading the sound files (of course, there are other options for both tasks).