I'm creating an audio player with WPF and NAudio in C#.
Whenever my computer is under heavy load, the audio starts to stutter badly, which sounds awful. I noticed that this does not seem to happen with similar applications like Spotify or Windows Media Player.
How can I improve the performance of the audio thread? Is there a way to give it priority over other threads?
Edit: Code
WavePlayer = new WaveOut();
source = new AudioFileReader(Filepath);
WavePlayer.Init(source);
WavePlayer.Play();
By default, in a WinForms / WPF app, WaveOut will use the UI thread to fill the audio buffers. If you use WaveOutEvent instead, you'll get a background thread doing that work for you. WasapiOut and DirectSoundOut also work this way.
Remember that if you can't fill buffers in a timely fashion you will get stuttering/dropouts in the audio. So if switching driver models doesn't work for you, you might need to optimise your audio code, or increase the buffer durations.
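For illustration, here's a minimal sketch of the background-thread variant; the latency and buffer values are just example numbers, not tuned recommendations:

// WaveOutEvent fills its buffers on a background thread instead of the UI thread.
var wavePlayer = new WaveOutEvent
{
    DesiredLatency = 300, // total buffered audio in milliseconds; larger = more robust, more latent
    NumberOfBuffers = 3   // the latency is split across this many buffers
};
var source = new AudioFileReader(Filepath);
wavePlayer.Init(source);
wavePlayer.Play();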
Related
The reason I want to do this is to be able to layer the background music (e.g., a simple song starts playing, the player triggers something, an instrument is added). I can work out the timing issues, if any.
I thought I could do that with MediaPlayer/Song, but it wouldn't work.
All I'm really looking for is the downsides of using SoundEffectInstance.
P.S. I don't use XACT, since I'll be changing over to MonoGame eventually.
Thanks
Actually, that's what the SoundEffectInstance is for!
It has limitations though, depending on the platform your game is running on:
On Windows Phone, a game can have a maximum of 16 total playing SoundEffectInstance instances at one time, combined across all loaded SoundEffect objects. The only limit to the total number of loaded SoundEffectInstance and SoundEffect objects is available memory. However, the user can play only 16 sound effects at one time. Attempts to play a SoundEffectInstance beyond this limit will fail. On Windows, there is no hard limit. Playing too many instances can lead to performance degradation. On Xbox 360, the limit is 300 sound effect instances loaded or playing. Dispose of old instances if you need more.
Oh, and by the way, it's been a long time since I played with XNA, but I'm pretty sure the XACT tool was no longer necessary by the end of its life cycle.
I seem to recall that you could load an MP3 in the Content folder and play it via the SoundEffectInstance object.
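If it helps, here's a hedged sketch of the layering idea with SoundEffectInstance; the asset names are made up for illustration:

// Two looping layers; asset names ("BaseLoop", "InstrumentLoop") are hypothetical.
SoundEffect baseLoop = Content.Load<SoundEffect>("BaseLoop");
SoundEffect instrumentLoop = Content.Load<SoundEffect>("InstrumentLoop");

SoundEffectInstance baseInstance = baseLoop.CreateInstance();
baseInstance.IsLooped = true;
baseInstance.Play();

// Later, when the player triggers something, layer the instrument on top:
SoundEffectInstance instrumentInstance = instrumentLoop.CreateInstance();
instrumentInstance.IsLooped = true;
instrumentInstance.Play();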
Actually, I think you'll find using the MediaPlayer class combined with the Song class is the recommended way to play background music.
Provides methods and properties to play, pause, resume, and stop songs. MediaPlayer also exposes shuffle, repeat, volume, play position, and visualization capabilities.
I think the primary difference is that MediaPlayer can stream the data rather than loading it all into memory at once. So, for long music tracks this is the way to go.
Also, in MonoGame these classes are implemented by wrapping around the platform specific classes that do the same thing. For example, on Android the SoundEffectInstance uses the Android SoundPool (intended for sound effects) and the MediaPlayer uses the Android MediaPlayer (intended for music). See this post on the MonoGame forums for reference.
slygamer says: MediaPlayer for background music and SoundEffect for sound effects is how it is designed to be used.
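As a rough sketch of that split (the asset names are made up):

// Long, streamed background music via MediaPlayer/Song.
Song background = Content.Load<Song>("BackgroundTrack"); // hypothetical asset name
MediaPlayer.IsRepeating = true;
MediaPlayer.Volume = 0.8f;
MediaPlayer.Play(background);

// Short, fully loaded effects via SoundEffect.
SoundEffect stinger = Content.Load<SoundEffect>("Stinger"); // hypothetical asset name
stinger.Play();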
I want to synchronize the playback of a song to a timer so that I can keep the beats of a song in sync with things rendered on the screen. Any way of accomplishing this using NAudio?
Several of the output devices in NAudio support the IWavePosition interface, which gives a more accurate indication of where the soundcard currently is in the buffer it is playing. Usually this is reported as the number of bytes played since playback started, so it does not necessarily correspond to the position within the file or song you are playing. If you use this, you will need to keep track of when you started playing.
Usually you would keep the things rendered on screen synchronized to the audio playback position, rather than the other way round.
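A minimal sketch of reading that position, assuming your output device (waveOut here, e.g. a WaveOutEvent) implements IWavePosition:

if (waveOut is IWavePosition positionProvider)
{
    // GetPosition() returns bytes played since Play() was called.
    long bytesPlayed = positionProvider.GetPosition();
    double seconds = (double)bytesPlayed
                     / positionProvider.OutputWaveFormat.AverageBytesPerSecond;
    // "seconds" is time since playback started, not the position in the file;
    // add whatever offset you seeked to before starting playback.
}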
I have a single WMA file which contains lots of different pieces of audio.
Is there any way I can play part of a sound stream?
Something like:
public static void Play(Stream soundStream, long start, long end);
You may be able to do this using NAudio, an audio library for .NET.
Using the example here I was able to throw together a quick test application to try it. Using the WaveStream.Skip(int seconds) method you can start at a specific position in the file. I have not been able to work out how to stop at an end position though. Below is the modified sample that starts a WMA file at the 30-second mark:
IWavePlayer waveOutDevice = new WaveOut();
WaveStream mainOutputStream;
WaveChannel32 volumeStream;
WaveStream wmaReader = new WMAFileReader(@"F:\My Music\The Prodigy\Music for the Jilted Generation\01 Intro.wma");
volumeStream = new WaveChannel32(wmaReader);
mainOutputStream = volumeStream;
mainOutputStream.Skip(30); //start 30 seconds into the file
waveOutDevice.Init(mainOutputStream);
waveOutDevice.Play();
The above sample omits the cleanup code to stop playback and dispose of the streams, however. Hope that helps a bit.
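For the missing end position, one option (my assumption, not part of the original sample) is to poll the stream's CurrentTime and stop once it passes the desired mark:

// Stop playback once the stream passes an assumed 45-second end mark.
var stopTimer = new System.Timers.Timer(50); // check every 50 ms
stopTimer.Elapsed += (s, e) =>
{
    if (mainOutputStream.CurrentTime >= TimeSpan.FromSeconds(45))
    {
        waveOutDevice.Stop();
        stopTimer.Stop();
    }
};
stopTimer.Start();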
Not in the way you want, no.
I assume this is within a WinForms or WPF context. The solution is to host the WMP ActiveX control in your project, load the WMA file into it, set the Position/Seek property, play it for a while, and stop it when the position reaches a certain point. I don't believe the WMP ActiveX control has a "reached position" event, so you'd need to watch the position from a timer or another thread and stop playback when it passes your end point.
It's a hack, but should work. You should be able to get something that "works" within a few hours if you're familiar with hosting ActiveX controls within .NET applications. Note that you'll want to make your application x86-only because of compatibility issues with the 64-bit WMP ActiveX control.
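A rough sketch of the hack, assuming a WinForms form hosting the control via the AxWMPLib interop assembly (the file path and clip boundaries are made up):

// axPlayer is an AxWMPLib.AxWindowsMediaPlayer dropped onto the form.
axPlayer.URL = @"C:\sounds\all-clips.wma";   // hypothetical file path
axPlayer.Ctlcontrols.currentPosition = 30.0; // seek to the clip start (seconds)
axPlayer.Ctlcontrols.play();

// WMP raises no "reached position X" event, so poll from a UI timer.
var timer = new System.Windows.Forms.Timer { Interval = 50 };
timer.Tick += (s, e) =>
{
    if (axPlayer.Ctlcontrols.currentPosition >= 45.0) // hypothetical clip end
    {
        axPlayer.Ctlcontrols.pause();
        timer.Stop();
    }
};
timer.Start();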
The much harder alternative is to work with DirectShow from within your application and create a render graph for WMA files, doing the manipulation, seeking, and playback yourself. DirectShow has a very steep learning curve; expect this to take you at least a few days, or even a few weeks if you've never worked with COM before.
I want to create a simple video renderer to play around, and do stuff like creating what would be a mobile OS just for fun. My father told me that in the very first computers, you would edit a specific memory address and the screen would update. I would like to simulate this inside a window in Windows. Is there any way I can do this with C#?
This used to be done because you could get direct access to the video buffer. This is typically not available with today's systems, as the video memory is managed by the video driver and OS. Further, there really isn't a 1:1 mapping of video memory buffer and what is displayed anymore. With so much memory available, it became possible to have multiple buffers and switch between them. The currently displayed buffer is called the "front buffer" and other, non-displayed buffers are called "back buffers" (for more, see https://en.wikipedia.org/wiki/Multiple_buffering). We typically write to back buffers and then have the video system update the front buffer for us. This provides smooth updates, as the video driver synchronizes the update with the scan rate of the monitor.
To write to back buffers using C#, my favorite technique is to use the WPF WriteableBitmap. I've also used System.Drawing.Bitmap to update the screen by writing pixels to it via LockBits.
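As a small sketch of that approach (assuming a WPF window with an Image control named Screen in its XAML):

// Treat the WriteableBitmap as your "video memory" and show it via an Image control.
var bmp = new WriteableBitmap(320, 240, 96, 96, PixelFormats.Bgra32, null);
Screen.Source = bmp;

// Poke one red pixel at (10, 20) - the modern echo of writing to a memory address.
byte[] pixel = { 0, 0, 255, 255 }; // B, G, R, A
bmp.WritePixels(new Int32Rect(10, 20, 1, 1), pixel, 4, 0); // stride = 4 bytes for one pixel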
It's a full-featured topic that's outside the scope of this answer (it won't fit, not that I wouldn't ramble about it for hours :-), but this should get you started with drawing in C#:
http://www.geekpedia.com/tutorial50_Drawing-with-Csharp.html
Things have come a long way from the old days of direct memory manipulation... although everything is still tied to pixels.
Edit: Oh, and if you run into flickering problems and get stuck, drop me a line and I'll send you a DoubleBuffered panel to paint with.
It's hard to put this into the title, so let me explain.
I have an application that uses Direct3D to display a mesh and DirectShow (VMR9 + allocator) to play a video, sending each video frame as a texture to the Direct3D portion to be applied onto the mesh. The application needs to run 24/7. At the very least it's allowed to be restarted every 24 hours, but no more frequently than that.
Now the problem is that DirectShow seems to run into trouble after a few hours of playback, due to either the codec, the video driver, or the video file itself. At that point the application simply refuses to play any more video, but the Direct3D portion keeps running fine and the mesh is still displayed. Once the application is restarted, everything goes back to normal.
So I'm thinking of splitting the two parts into two different processes, so that whenever the video process fails to play video, I can restart it immediately without losing the Direct3D portion.
So here comes the actual question: is it possible to pass the texture from the video player process to the Direct3D process by passing a pointer, i.e., retrieve another process's texture from a pointer? My initial guess is that it's not possible due to protected memory addressing.
I have TCP communication setup on both process, and let's not worry about communicating the pointer at this point.
This might be a crazy idea, but it would work wonders if it's ever possible.
Yes, you can do this with Direct3D 9Ex. This requires Windows Vista or later, and you must use a Direct3DDevice9Ex. You can read about sharing resources here.
Now the problem is that DirectShow seems to run into trouble after a few hours of playback, due to either the codec, the video driver, or the video file itself. At that point the application simply refuses to play any more video.
Why not just fix this bug instead?
If you separate it out as a separate process then I suspect this would not be possible, but if it were a child thread, it would share the same memory address space, I believe.
Passing textures doesn't work.
I'd do it using the following methods:
Replace the VMR with a custom renderer+allocator that places the picture into memory
You allocate memory for pictures from a shared memory pool
Once you receive another picture you signal an event
The Direct3D process waits for this event and updates the mesh with the new texture
Note you'll need to transfer the picture data to the graphics card. The big difference is that this transfer now happens in the Direct3D app and not in the DirectShow app.
You could also try to use the VMR for this. I'm not sure if the custom allocator/renderer parts will allow you to render into shared memory.
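For the shared-memory pool part, here's a hedged sketch using a named MemoryMappedFile for the picture data and a named EventWaitHandle for signalling; the map/event names, frame size, and BGRA layout are all assumptions for illustration:

using System.IO.MemoryMappedFiles;
using System.Threading;

const int FrameBytes = 1024 * 768 * 4; // assumed frame size (32-bit BGRA)

// DirectShow (producer) process: write the decoded picture, then signal.
void PublishFrame(byte[] frameData)
{
    using var mmf = MemoryMappedFile.CreateOrOpen("SharedVideoFrame", FrameBytes);
    using var accessor = mmf.CreateViewAccessor();
    accessor.WriteArray(0, frameData, 0, FrameBytes);
    using var frameReady = new EventWaitHandle(false, EventResetMode.AutoReset, "FrameReady");
    frameReady.Set();
}

// Direct3D (consumer) process: wait for the signal, copy the bytes out,
// then upload them into a locked texture as usual. The consumer should be
// waiting before frames are published, or signals can be missed.
byte[] ConsumeFrame()
{
    using var frameReady = new EventWaitHandle(false, EventResetMode.AutoReset, "FrameReady");
    frameReady.WaitOne();
    using var mmf = MemoryMappedFile.OpenExisting("SharedVideoFrame");
    using var accessor = mmf.CreateViewAccessor();
    var frame = new byte[FrameBytes];
    accessor.ReadArray(0, frame, 0, FrameBytes);
    return frame;
}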
Maybe you could use the Sample Grabber in your DirectShow host process to get the image as a system-memory buffer. Then you could use WriteProcessMemory to write the data into a pre-agreed address (which you set up over TCP or something) in your Direct3D app.