I want to synchronize the playback of a song to a timer so that I can keep the beats of a song in sync with things rendered on the screen. Any way of accomplishing this using NAudio?
Several of the output devices in NAudio support the IWavePosition interface, which gives a more accurate indication of where the soundcard currently is in the buffer it is playing. Usually this is reported as the number of bytes played since playback started, so it does not necessarily correspond to the position within the file or song you are playing. If you use this, you will need to keep track of when you started playing.
Usually you would keep the things rendered on screen synchronized to the audio playback position, rather than the other way round.
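For illustration, here is a minimal sketch of polling IWavePosition with WaveOutEvent (one of the output devices that implements it); the file name and variable names are placeholders:

using NAudio.Wave;

var reader = new AudioFileReader("song.mp3");
var output = new WaveOutEvent(); // WaveOutEvent implements IWavePosition
output.Init(reader);
output.Play();

// In your render loop: bytes played since Play(), converted to seconds.
var position = (IWavePosition)output;
double seconds = position.GetPosition() /
    (double)position.OutputWaveFormat.AverageBytesPerSecond;

You would then drive the on-screen rendering from this clock rather than from a separate timer.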
I'm currently passing my time by creating a piano app. Each key is represented by a simple button with a command that fires on click, which executes this method in the ViewModel:
private void PlaySound(object parameter)
{
    // parameter is the key's identifier, passed through the button's command
    var mediaPlayer = new MediaPlayer();
    mediaPlayer.Open(new System.Uri(SoundBar.GetSoundPathByIdent(int.Parse(parameter.ToString()))));
    mediaPlayer.Play();
}
I think the problem is that the MediaPlayer leaves behind a reference that prevents the garbage collector from collecting it, which ends up overloading your RAM after playing for a while.
The solution I found was to call mediaPlayer.Close();
But this should only happen after the sound has finished playing, otherwise it will be cut.
Is there a way to check if the played sound has finished playing?
I have already spent some time researching and testing, but I couldn't come up with a working solution.
The Position and NaturalDuration properties give you details about where you are in the stream (i.e. Position / NaturalDuration gives you a value between 0.0 and 1.0 that represents the playback position as a fraction of the whole).
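If you just need to know when a sound has finished so you can safely call Close(), MediaPlayer also raises a MediaEnded event; a minimal sketch (soundPath is a placeholder for your file's path):

using System;
using System.Windows.Media;

var mediaPlayer = new MediaPlayer();
// Close only after playback completes, so the sound isn't cut off.
mediaPlayer.MediaEnded += (sender, e) => mediaPlayer.Close();
mediaPlayer.Open(new Uri(soundPath));
mediaPlayer.Play();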
But you may want to build an "orchestrator" for your media playback. Assuming that you don't want to play all sounds at the same time, an orchestrator could be responsible for managing the lifetime of your media players, and determining where in the playback they are.
In your application you could create a single instance of the orchestrator on start up. The orchestrator could create and manage a pool of media players it could reuse when it needs to play a chord. Then your piano app could support a certain number of chords at the same time and have a single poller that determines which media players are free and which are "busy" playing audio.
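As a rough illustration of that idea (the class and member names below are invented for the sketch, not taken from your code):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Media;

// Hypothetical orchestrator: a fixed pool of MediaPlayers reused across key presses.
class MediaPlayerPool
{
    private readonly List<MediaPlayer> players = new List<MediaPlayer>();
    private readonly int maxVoices;

    public MediaPlayerPool(int maxVoices) => this.maxVoices = maxVoices;

    public void Play(Uri sound)
    {
        // A player counts as free once its position has reached its natural duration.
        var player = players.FirstOrDefault(p =>
            p.NaturalDuration.HasTimeSpan &&
            p.Position >= p.NaturalDuration.TimeSpan);

        if (player == null && players.Count < maxVoices)
        {
            player = new MediaPlayer();
            players.Add(player);
        }
        if (player == null) return; // all voices busy: drop (or queue) the note

        player.Open(sound);
        player.Play();
    }
}

Because the pool is bounded, memory usage stays flat no matter how long you play.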
I'm creating an audio player with WPF and NAudio in C#.
Whenever the performance of my computer is low, the audio starts to lag extremely, which sounds awful. I noticed that this does not seem to happen in similar applications like Spotify or Windows Media Player.
How can I increase the performance of the audio thread? Is there a way to give it priority before other threads?
Edit: Code
WavePlayer = new WaveOut();
source = new AudioFileReader(Filepath);
WavePlayer.Init(source);
WavePlayer.Play();
By default, in a WinForms / WPF app, WaveOut will use the UI thread to fill the audio buffers. If you use WaveOutEvent instead, you'll get a background thread doing that work for you. WasapiOut and DirectSoundOut also work this way.
Remember that if you can't fill buffers in a timely fashion you will get stuttering/drop outs in audio. So if switching driver model doesn't work for you, you might need to optimise your audio code, or increase the buffer durations.
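For example, a sketch of the same setup using WaveOutEvent with a bit more buffering (the latency and buffer-count values are illustrative, and this assumes WavePlayer is declared as IWavePlayer, which both classes implement):

WavePlayer = new WaveOutEvent
{
    DesiredLatency = 200, // ms of buffering: higher is more resilient to load spikes, but laggier
    NumberOfBuffers = 3
};
source = new AudioFileReader(Filepath);
WavePlayer.Init(source);
WavePlayer.Play();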
The reason I want to do this is to be able to layer the background music (e.g., a simple song starts playing, the player triggers something, an instrument is added). I can work out the timing issues, if any.
I thought I could do that with MediaPlayer/Song, but it wouldn't work.
All I'm really looking for is the downsides of using SoundEffectInstance.
P.S. I don't use XACT, since I'll be switching over to MonoGame eventually.
Thanks
Actually, that's what the SoundEffectInstance is for!
It has limitations though, depending on the platform your game is running on:
On Windows Phone, a game can have a maximum of 16 total playing SoundEffectInstance instances at one time, combined across all loaded SoundEffect objects. The only limit to the total number of loaded SoundEffectInstance and SoundEffect objects is available memory. However, the user can play only 16 sound effects at one time. Attempts to play a SoundEffectInstance beyond this limit will fail. On Windows, there is no hard limit. Playing too many instances can lead to performance degradation. On Xbox 360, the limit is 300 sound effect instances loaded or playing. Dispose of old instances if you need more.
Oh, and by the way, it's been a long time since I played with XNA, but I'm pretty sure the XACT tool was no longer necessary by the end of its life cycle.
I seem to recall that you could load an mp3 on the Content folder and play it via the SoundEffectInstance object.
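For the layering you describe, something like this sketch should work (the asset names are placeholders, and it assumes the code runs inside your Game class where Content is available):

using Microsoft.Xna.Framework.Audio;

// Load the layers (asset names are made up for the example).
SoundEffect baseLoop = Content.Load<SoundEffect>("Audio/baseLoop");
SoundEffect drumLoop = Content.Load<SoundEffect>("Audio/drumLoop");

// The simple song starts playing...
SoundEffectInstance baseInstance = baseLoop.CreateInstance();
baseInstance.IsLooped = true;
baseInstance.Play();

// ...later, when the player triggers something, layer in the instrument.
SoundEffectInstance drumInstance = drumLoop.CreateInstance();
drumInstance.IsLooped = true;
drumInstance.Play();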
Actually, I think you'll find using the MediaPlayer class combined with the Song class is the recommended way to play background music.
Provides methods and properties to play, pause, resume, and stop songs. MediaPlayer also exposes shuffle, repeat, volume, play position, and visualization capabilities.
I think the primary difference is that the MediaPlayer can stream the data into memory rather than loading it all in at once. So, for long playing music tracks this is the way to go.
Also, in MonoGame these classes are implemented by wrapping around the platform specific classes that do the same thing. For example, on Android the SoundEffectInstance uses the Android SoundPool (intended for sound effects) and the MediaPlayer uses the Android MediaPlayer (intended for music). See this post on the MonoGame forums for reference.
slygamer says: MediaPlayer for background music and SoundEffect for sound effects is how it is designed to be used.
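A minimal sketch of that usage (the asset name is a placeholder):

using Microsoft.Xna.Framework.Media;

Song background = Content.Load<Song>("Audio/backgroundMusic");
MediaPlayer.IsRepeating = true; // loop the track
MediaPlayer.Play(background);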
Hi, I want to decrease the playing speed of my audio tracks in C# using NAudio; i.e., I want tracks to play at a slower speed than their original speed.
Previously I was using Windows Media Player object for just this thing and NAudio for everything else, but I want to shift completely to NAudio.
NAudio does not have a built-in feature to do this. When I need to change playback rate, I create a managed wrapper around the SoundTouch dll. I keep meaning to blog about how to do this, but for now, check out the PracticeSharp project which also uses SoundTouch and NAudio.
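If you can tolerate the pitch dropping along with the speed, there is also a crude varispeed trick that works with NAudio alone: re-declare the stream's sample rate so the device plays the same samples more slowly. Note this is not time-stretching like SoundTouch (which preserves pitch); a sketch, assuming a PCM source such as WaveFileReader:

using NAudio.Wave;

// Crude varispeed: plays slower *and* lower-pitched; not pitch-preserving.
class VarispeedProvider : IWaveProvider
{
    private readonly IWaveProvider source;

    public VarispeedProvider(IWaveProvider source, double speed)
    {
        this.source = source;
        // Declare a scaled sample rate; speed 0.8 plays at 80% of the original.
        WaveFormat = new WaveFormat(
            (int)(source.WaveFormat.SampleRate * speed),
            source.WaveFormat.BitsPerSample,
            source.WaveFormat.Channels);
    }

    public WaveFormat WaveFormat { get; }

    public int Read(byte[] buffer, int offset, int count)
        => source.Read(buffer, offset, count);
}

Usage would be output.Init(new VarispeedProvider(new WaveFileReader("track.wav"), 0.8)); for playback at 80% speed.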
Very similar to this question, I have a networked micro-controller which collects PCM audio (8-bit, 8 kHz) and streams the data as raw bytes over a TCP network socket. I am able to connect to the device, open a NetworkStream, and create a RIFF/wave file out of the collected data.
I would like to take it a step further and enable live playback of the measurements. My approach so far has been to buffer the incoming data into multiple MemoryStreams with an appropriate RIFF header, and, when each chunk is complete, to use System.Media.SoundPlayer to play the wave file segment. To avoid high latency, each segment is only 0.5 seconds long.
My major issue with this approach is that often there is a distinctive popping sound between segments (since each chunk is not necessarily zero-centered or zero-ended).
Questions:
Is there a more suitable or direct method to playback live streaming PCM audio in C#?
If not, are there additional steps I can take to make the multiple playbacks run more smoothly?
I don't think you can manage it without popping sounds using SoundPlayer, because there must be no delay when pushing the buffers. Normally you should always have one extra buffer queued, but SoundPlayer only buffers a single buffer. Even when SoundPlayer raises its event to say it is ready, you're already too late to start a new sound.
I advise you to check this link: Recording and Playing Sound with the Waveform Audio Interface http://msdn.microsoft.com/en-us/library/aa446573.aspx
It contains some SoundPlayer examples (skip those), but it also shows how to use WaveOut. Look at the section "Playing with WaveOut".
The SoundPlayer is normally used for notification sounds.
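If you're open to using NAudio instead of raw waveOut interop, its BufferedWaveProvider is one way to keep the device fed continuously and avoid the per-segment pops; a sketch (the networkStream variable is assumed to be your open NetworkStream):

using System;
using NAudio.Wave;

var format = new WaveFormat(8000, 8, 1); // 8 kHz, 8-bit, mono PCM
var buffer = new BufferedWaveProvider(format)
{
    BufferDuration = TimeSpan.FromSeconds(2),
    DiscardOnBufferOverflow = true // drop data instead of throwing if we fall behind
};

var output = new WaveOutEvent();
output.Init(buffer);
output.Play(); // plays silence until samples arrive

// On a background thread: queue raw PCM straight off the socket.
var chunk = new byte[1024];
int read;
while ((read = networkStream.Read(chunk, 0, chunk.Length)) > 0)
{
    buffer.AddSamples(chunk, 0, read);
}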