I have a problem with Audio Units in MonoTouch/Xamarin.
It seems like I can't get a callback for recording, just for playback.
I used this example:
https://github.com/xamarin/monotouch-samples/blob/master/AUSoundTriggeredPlayingSoundMemoryBased/ExtAudioBufferPlayer.cs
and looked at Obj-C examples. The Obj-C examples are pretty much the same as my code, so I'm a little confused about this.
The output when running my example is:
INPUT0
which is the bus number for output.
The expected output would be:
INPUT1
So my question is: how do I get a recording callback and a playback callback running at the same time, or failing that, just a recording callback?
My Code:
void prepareAudioUnit()
{
    // AudioSession
    AudioSession.Initialize();
    AudioSession.Category = AudioSessionCategory.PlayAndRecord;
    AudioSession.PreferredHardwareIOBufferDuration = Config.packetLength;
    AudioSession.PreferredHardwareSampleRate = Format.samplingRate;
    //AudioSession.SetActive (false);
    AudioSession.SetActive(true);
    Logger.log("HWSR:" + AudioSession.CurrentHardwareSampleRate);

    // Getting the VoiceProcessingIO output AudioComponent
    _audioComponent = AudioComponent.FindComponent(AudioTypeOutput.VoiceProcessingIO);

    // Creating an audio unit instance
    _audioUnit = new AudioUnit(_audioComponent);

    // Turning on the microphone
    _audioUnit.SetEnableIO(true,
        AudioUnitScopeType.Input,
        1 // Remote input (bus 1)
    );
    _audioUnit.SetEnableIO(true,
        AudioUnitScopeType.Output,
        0 // Remote output (bus 0)
    );

    // Setting the audio format
    _audioUnit.SetAudioFormat(Format.AudioStreamBasicDescription,
        AudioUnitScopeType.Output,
        1
    );
    _audioUnit.SetAudioFormat(Format.AudioStreamBasicDescription,
        AudioUnitScopeType.Input,
        0
    );

    // Setting the callback methods
    _audioUnit.SetRenderCallback(_audioUnit_OutputCallback, AudioUnitScopeType.Global, 0);
    _audioUnit.SetRenderCallback(_audioUnit_InputCallback, AudioUnitScopeType.Global, 1);
}
AudioUnitStatus _audioUnit_OutputCallback(AudioUnitRenderActionFlags actionFlags, AudioTimeStamp timeStamp, uint busNumber, uint numberFrames, AudioBuffers data)
{
    Logger.log("OUTPUT" + busNumber);
    return AudioUnitStatus.NoError;
}

AudioUnitStatus _audioUnit_InputCallback(AudioUnitRenderActionFlags actionFlags, AudioTimeStamp timeStamp, uint busNumber, uint numberFrames, AudioBuffers data)
{
    Logger.log("INPUT" + busNumber);
    return AudioUnitStatus.NoError;
}
This problem is caused by a bug in Xamarin: they forgot to expose a method for setting input callbacks.
I reported the bug, but for anyone needing the same fix:
http://nopaste.info/8d0aca98d9.html
It's not pretty, but it shows how to work around the problem yourself until Xamarin fixes this.
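In case that paste goes away, here is a minimal sketch of the same idea: bypass the binding and set kAudioOutputUnitProperty_SetInputCallback yourself via P/Invoke. The property constants come from the AudioUnit headers, but the Handle property on the MonoTouch AudioUnit class and the exact marshalling are assumptions on my part, so treat this as a starting point rather than a drop-in fix:

// requires: using System.Runtime.InteropServices;
[StructLayout(LayoutKind.Sequential)]
struct AURenderCallbackStruct
{
    public AURenderCallbackDelegate Proc;
    public IntPtr ProcRefCon;
}

delegate int AURenderCallbackDelegate(IntPtr refCon, ref int actionFlags,
    ref AudioTimeStamp timeStamp, uint busNumber, uint numberFrames, IntPtr data);

[DllImport("/System/Library/Frameworks/AudioToolbox.framework/AudioToolbox")]
static extern int AudioUnitSetProperty(IntPtr unit, uint propertyId, uint scope,
    uint element, ref AURenderCallbackStruct data, uint dataSize);

const uint kAudioOutputUnitProperty_SetInputCallback = 2005; // from AudioUnitProperties.h
const uint kAudioUnitScope_Global = 0;

// Keep the delegate in a field so the GC cannot collect it while native code holds it.
AURenderCallbackDelegate _inputProc;

void SetInputCallback(AudioUnit audioUnit)
{
    _inputProc = InputProc;
    var callback = new AURenderCallbackStruct { Proc = _inputProc };
    // audioUnit.Handle is assumed here to expose the native AudioUnit pointer.
    int err = AudioUnitSetProperty(audioUnit.Handle,
        kAudioOutputUnitProperty_SetInputCallback,
        kAudioUnitScope_Global,
        1, // input bus
        ref callback,
        (uint)Marshal.SizeOf(typeof(AURenderCallbackStruct)));
    if (err != 0)
        Logger.log("AudioUnitSetProperty failed: " + err);
}

int InputProc(IntPtr refCon, ref int actionFlags, ref AudioTimeStamp timeStamp,
    uint busNumber, uint numberFrames, IntPtr data)
{
    // New microphone samples are available; pull them with AudioUnitRender here.
    Logger.log("INPUT" + busNumber);
    return 0; // noErr
}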
I am having trouble receiving correct bytes from a PC on a PIC18F27J53.
The PIC UART is set up as standard asynchronous, 8 bits, 9600 baud, no parity.
The PC is running Windows 10. I have made a simple UART program that sends a few ints, all separated by commas, like the following:
"123,24,65,98,12,45,564,987,321,0,5.9,87,65,789,123,6554,213,8754\n"
I have tried different ways:
Sending each char one by one; however, the PIC seems to get stuck midway or early in the transfer, and the RX flag doesn't go high anymore.
Sending each int followed by "\n", with the PIC parsing the chars and cutting the read after a "\n" is found. This works better, and I can get more data in, but the final received data is corrupted: some ints are wrong, etc.
This clearly looks like a sync issue; is the PC simply too fast for the PIC?
If so, I could look at synchronous UART, but according to the web that is far from the usual choice, which makes me think I must have another issue to resolve in asynchronous mode.
My question: what is the most popular, robust way to do full-duplex PIC-to-PC UART communication?
Here are my PIC receive routines, fairly standard and simple (I think):
void int_receive_data(void)
{
    char input_element[10] = { 0 };
    char full_rx[128] = { 0 };

    for (int i = 0; i < 22; i++) {
        p18f47j53_uart2_read_text(input_element, sizeof(input_element));
        strncat(full_rx, input_element, strlen(input_element));
        strncat(full_rx, ",", 1);
    }
}
void p18f47j53_uart2_read_text(char *output, uint8_t max_length)
{
    uint8_t c;
    char buffer[64] = { 0 };

    for (uint8_t i = 0; i < max_length; i++) {
        c = p18f47j53_uart2_receive_u8();
        buffer[i] = c;
        if (c == '\n') {           // newline terminates the field
            buffer[i] = 0;
            memcpy(output, buffer, i + 1); // copy i chars plus the terminating NUL
            break;
        }
    }
}
uint8_t p18f47j53_uart2_receive_u8(void)
{
    // wait for the flag
    while (!PIR3bits.RC2IF);

    // reset receiver on overrun error
    if (RCSTA2bits.OERR) {
        RCSTA2bits.CREN = 0;
        RCSTA2bits.CREN = 1;
        return PIC_RC_FAIL;
    }

    // reset on framing error
    if (RCSTA2bits.FERR) {
        RCSTA2bits.SPEN = 0;
        RCSTA2bits.SPEN = 1;
        return PIC_RC_FAIL;
    }

    return RCREG2;
}
On the PC C# side, my sending looks like this:
string[] full_separated = full_tx.Split(',');
foreach (string s in full_separated)
    my_port.WriteLine(s);
The PIC is running from its internal 8 MHz clock.
I never tried the synchronous way, as it seems more complicated and 99 percent of web results show the asynchronous way, which makes me think I had better debug what I am doing.
Any ideas or advice? Thanks.
Well, not really a solution, but an alternative: you should break the frame into small chunks, and if possible have the receiver ACK with a char to notify the transmitter to go ahead with another chunk (see the sketch at the end of this answer).
The reason I am saying that: I have a mikroE dev board with a similar PIC, and while running an "out of the box" example and sending
"111,222,333,444,555,666,777,888,999"
it looks like the "999" is creating issues. Too many bytes; maybe a buffer issue, or maybe the not-quite-perfect baud rate mismatch builds up after a few bytes?
Repeating the send every 50 ms, 500 ms or 1000 ms doesn't make it better.
Changing the baud rate doesn't either.
Only after removing ",999" does it seem to work all right.
Without the ",999" I am guessing it is still on the "edge of working", so maybe also remove "666,777,888,999" and the communication should feel more comfortable.
More code, more traffic, but at least it works.
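Here is a minimal sketch of the chunk-and-ACK idea on the PC side, assuming my_port is an open System.IO.Ports.SerialPort and that the PIC firmware is extended to send back one ACK byte after consuming each chunk. The ACK value and chunk size are arbitrary choices, not something your PIC already does:

using System;
using System.IO;
using System.IO.Ports;
using System.Text;

const byte Ack = 0x06;   // hypothetical ACK byte; the PIC must send the same value
const int ChunkSize = 8; // keep chunks well under the PIC's receive buffer size

void SendWithAck(SerialPort port, string payload)
{
    byte[] data = Encoding.ASCII.GetBytes(payload);
    for (int offset = 0; offset < data.Length; offset += ChunkSize)
    {
        int count = Math.Min(ChunkSize, data.Length - offset);
        port.Write(data, offset, count);

        // Block until the PIC confirms it has consumed the chunk
        // (honors port.ReadTimeout, so a dead PIC throws instead of hanging).
        if (port.ReadByte() != Ack)
            throw new IOException("PIC did not acknowledge the chunk");
    }
}

// Usage:
// my_port.ReadTimeout = 1000;
// SendWithAck(my_port, "123,24,65,98,12,45\n");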
I have been playing around with SharpDX.XAudio2 for a few days now, and while things have been largely positive (the odd software quirk here and there), the following problem has me completely stuck:
I am working in C# .NET using VS2015.
I am trying to play multiple sounds simultaneously.
To do this, I have made:
- Test.cs: Contains main method
- cSoundEngine.cs: Holds XAudio2, MasteringVoice, and sound management methods.
- VoiceChannel.cs: Holds a SourceVoice, and in future any sfx/ related data.
cSoundEngine:
List<VoiceChannel> sourceVoices;
XAudio2 engine;
MasteringVoice master;

public cSoundEngine()
{
    engine = new XAudio2();
    master = new MasteringVoice(engine);
    sourceVoices = new List<VoiceChannel>();
}

public VoiceChannel AddAndPlaySFX(string filepath, double vol, float pan)
{
    /**
     * Set up and start SourceVoice
     */
    NativeFileStream fileStream = new NativeFileStream(filepath, NativeFileMode.Open, NativeFileAccess.Read);
    SoundStream soundStream = new SoundStream(fileStream);
    SourceVoice source = new SourceVoice(engine, soundStream.Format);
    AudioBuffer audioBuffer = new AudioBuffer()
    {
        Stream = soundStream.ToDataStream(),
        AudioBytes = (int)soundStream.Length,
        Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
    };

    //Make voice wrapper
    VoiceChannel voice = new VoiceChannel(source);
    sourceVoices.Add(voice);

    //Volume
    source.SetVolume((float)vol);

    //Play sound
    source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
    source.Start();

    return voice;
}
Test.cs:
cSoundEngine engine = new cSoundEngine();
int total = 6;

for (int i = 0; i < total; i++)
{
    string filepath = System.IO.Directory.GetParent(System.IO.Directory.GetCurrentDirectory()).Parent.FullName + @"\Assets\Planet.wav";
    VoiceChannel sfx = engine.AddAndPlaySFX(filepath, 0.1, 0);
}

Console.Read(); //Input anything to end playback.
There is currently nothing worth showing in VoiceChannel.cs - it just holds 'SourceVoice source', which is the one parameter passed to the constructor!
Everything runs fine with up to 5 sounds (total = 5): all you hear is the blissful drone of Planet.wav. Any higher than 5, however, causes the console to freeze for ~5 seconds and then close (likely a C++ error which the debugger can't handle). Sadly there is no error message for us to look at.
From testing:
- Will not crash as long as you do not have more than 5 running sourcevoices.
- Changing sample rate does not seem to help.
- Setting inputChannels for master object to a different number makes no difference.
- MasteringVoice reports that the max number of input voices is 64.
- Making each sfx play from a different wav file makes no difference.
- Setting the volume for sourcevoices and/or master makes no difference.
From the XAudio2 API Documentation I found this quote: 'XAudio2 removes the 6-channel limit on multichannel sounds, and supports multichannel audio on any multichannel-capable audio card. The card does not need to be hardware-accelerated.'. This is the closest I have come to finding something that mentions this problem.
I am not well experienced with programming sfx and a lot of this is very new to me, so feel free to call me an idiot where appropriate, but please try to explain things in layman's terms.
Please, if you have any ideas or answers they would be greatly appreciated!
-Josh
As Chuck suggested, I have created a databank which holds the .wav data, and I just reference the single data store with each buffer. This has raised the sound limit to 20 - however, it has not fixed the problem as a whole, likely because I have not implemented it properly.
Implementation:
class SoundDataBank
{
    /**
     * Holds a single byte array for each sound
     */
    Dictionary<eSFX, Byte[]> bank;
    string curdir => Directory.GetParent(Directory.GetCurrentDirectory()).Parent.FullName;

    public SoundDataBank()
    {
        bank = new Dictionary<eSFX, byte[]>();
        bank.Add(eSFX.planet, NativeFile.ReadAllBytes(curdir + @"\Assets\Planet.wav"));
        bank.Add(eSFX.base1, NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav"));
    }

    public Byte[] GetSoundData(eSFX sfx)
    {
        byte[] output = bank[sfx];
        return output;
    }
}
In SoundEngine we create a SoundDataBank object (initialised in the SoundEngine constructor):
SoundDataBank soundBank;

public VoiceChannel AddAndPlaySFXFromStore(eSFX sfx, double vol)
{
    /**
     * The source voice will be automatically added to the MasteringVoice and engine in the constructor.
     */
    byte[] buffer = soundBank.GetSoundData(sfx);
    MemoryStream memoryStream = new MemoryStream(buffer);
    SoundStream soundStream = new SoundStream(memoryStream);
    SourceVoice source = new SourceVoice(engine, soundStream.Format);
    AudioBuffer audioBuffer = new AudioBuffer()
    {
        Stream = soundStream.ToDataStream(),
        AudioBytes = (int)soundStream.Length,
        Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
    };

    //Make voice wrapper
    VoiceChannel voice = new VoiceChannel(source, engine, MakeOutputMatrix());

    //Volume
    source.SetVolume((float)vol);

    //Play sound
    source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
    source.Start();

    sourceVoices.Add(voice);
    return voice;
}
Following this implementation, I can now play up to 20 sound effects - but NOT because we are playing from the sound bank. In fact, even running the old method for sound effects now gets up to 20 sfx instances.
The limit has improved to 20 because we call NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav") in the constructor of the SoundDataBank.
I suspect NativeFile is holding a store of loaded file data, so regardless of whether you run the original SoundEngine.AddAndPlaySFX() or SoundEngine.AddAndPlaySFXFromStore(), they are both running from memory?
Either way, this has quadrupled the limit, so it has been incredibly useful - but it requires further work.
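For what it's worth, here is a minimal sketch of how the sharing could be made explicit, so each play call reuses one decoded DataStream instead of re-parsing the bytes per voice. CachedSound, LoadSound and PlayCached are hypothetical names, and this reuses the engine/sourceVoices fields from the post:

// Hypothetical cache entry: a wav file parsed exactly once.
class CachedSound
{
    public SharpDX.Multimedia.WaveFormat Format;
    public SharpDX.DataStream Stream;
    public uint[] PacketsInfo;
    public int Length;
}

CachedSound LoadSound(string filepath)
{
    var soundStream = new SoundStream(new NativeFileStream(filepath, NativeFileMode.Open, NativeFileAccess.Read));
    return new CachedSound
    {
        Format = soundStream.Format,
        Stream = soundStream.ToDataStream(), // decoded data, kept alive for all voices
        PacketsInfo = soundStream.DecodedPacketsInfo,
        Length = (int)soundStream.Length
    };
}

VoiceChannel PlayCached(CachedSound sound, double vol)
{
    // Each voice gets its own AudioBuffer, but all of them point
    // at the same shared DataStream - nothing is copied per play.
    var audioBuffer = new AudioBuffer()
    {
        Stream = sound.Stream,
        AudioBytes = sound.Length,
        Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
    };
    var source = new SourceVoice(engine, sound.Format);
    source.SetVolume((float)vol);
    source.SubmitSourceBuffer(audioBuffer, sound.PacketsInfo);
    source.Start();

    var voice = new VoiceChannel(source);
    sourceVoices.Add(voice);
    return voice;
}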
I want to analyse my default playback device and detect beats. I've been using BASS WASAPI to get the FFT data of the selected device with:
int ret = BassWasapi.BASS_WASAPI_GetData(_fft, (int)BASSData.BASS_DATA_FFT2048);
I have been using that data to generate spectrum data and display it to the user. In addition, I want to detect beats using the BPMCounter class from BASS. However, as far as I can tell, the BPMCounter.ProcessAudio() function requires a stream (which I don't get with WASAPI) in order to work. Is there a way I can use BPMCounter with WASAPI? It would be great if someone could point me in the right direction. Thanks.
Edit:
Tried this to convert the data to a stream, but without success:
int ret = BassWasapi.BASS_WASAPI_GetData(_fft, (int)BASSData.BASS_DATA_FFT2048); // get channel FFT data
var chan = Bass.BASS_StreamCreate(0, 44100, BASSFlag.BASS_DEFAULT, BASSStreamProc.STREAMPROC_PUSH);
Bass.BASS_ChannelPlay(chan, false);
Bass.BASS_StreamPutData(chan, _fft, _fft.Length);
bool beat = _count.ProcessAudio(chan, true);
Debug.Write(beat);
beat is always False, although I can see from the spectrum that the capturing of the FFT data is correct.
I've just started playing with this lib a few hours ago and I am still going through the examples, so my answer may not be what you want. For my project I also want to turn WASAPI into a stream and use it for displaying a spectrum. What I did was to create a push stream right after BASS_WASAPI initialization.
To init your WASAPI, use this call and this delegate:
private void InitWasapi()
{
    // Keep a reference to this delegate (e.g. in a field) so it is not garbage
    // collected while native code is still calling it.
    WASAPIPROC _process = new WASAPIPROC(Process);
    bool res = BassWasapi.BASS_WASAPI_Init(_YourDeviceNumber, 0, 0, BASSWASAPIInit.BASS_WASAPI_BUFFER, 1f, 0f, _process, IntPtr.Zero);
    if (!res)
    {
        // Do error checking
    }

    // This is the part you are looking for (maybe!)
    // Use these flags because WASAPI delivers 32-bit float sample data
    var info = BassWasapi.BASS_WASAPI_GetInfo();
    _stream = Bass.BASS_StreamCreatePush(info.freq, info.chans, BASSFlag.BASS_STREAM_DECODE | BASSFlag.BASS_SAMPLE_FLOAT, IntPtr.Zero);

    BassWasapi.BASS_WASAPI_Start();
}

private int Process(IntPtr buffer, int length, IntPtr user)
{
    Bass.BASS_StreamPutData(_stream, buffer, length);
    return length;
}
Please note: this works, but I am still experimenting. For example, I am not getting exactly the same spectrum output as when I create the stream from the music file itself; there are some (small) differences. Maybe it's because I am using a custom EQ in Winamp for playing the same .mp3. If anyone knows more on this subject, I would also like to hear it!
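To tie this back to the original question, a BPMCounter could then be fed from that push stream. This is an untested sketch, assuming _stream is the decode push stream from above and that a timer polls it roughly every 20-50 ms:

// Hypothetical wiring: poll the push stream with Bass.Net's BPMCounter.
private BPMCounter _count = new BPMCounter(20, 44100); // update period in ms, sample rate

private void OnTimerTick(object sender, EventArgs e)
{
    // ProcessAudio reads fresh sample data from the decoding channel.
    bool beat = _count.ProcessAudio(_stream, true);
    if (beat)
    {
        Debug.WriteLine("Beat detected, current BPM: " + _count.BPM);
    }
}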
Question: What is the best way to programmatically disconnect and reconnect displays?
The Goal: Kill the video output (black screen with no backlight) on a display and later turn it back on. Imagine unplugging the video cord from the monitor, then plugging it back in.
My Attempt:
// Get the monitor to disable
uint iDevNum = 0;
DISPLAY_DEVICE displayDevice = new DISPLAY_DEVICE();
displayDevice.cb = Marshal.SizeOf(displayDevice);
EnumDisplayDevices(null, iDevNum, ref displayDevice, 0);

DEVMODE devMode = new DEVMODE();
EnumDisplaySettings(displayDevice.DeviceName, 0, ref devMode);

//
// Do something here to disable this display device!
//

// Save the display settings
ChangeDisplaySettingsEx(displayDevice.DeviceName, ref devMode,
    IntPtr.Zero, ChangeDisplaySettingsFlags.CDS_NONE, IntPtr.Zero);
I can interact with each display, but I can't figure out how to disconnect one.
It is similar to "Disconnect this display" in the Screen Resolution properties in Windows 7.
Notes:
Turning off video output on all displays won't work because I need the other monitors to stay on.
The desktop area on the "dead" display does NOT need to be usable when it is off. Also, it is fine if windows move around.
References:
SO: Enabling a Second Monitor
How to Turn Off a Monitor
1) Get MultiMonitorHelper from here:
https://github.com/ChrisEelmaa/MultiMonitorHelper/tree/master
2) Extend Win7Display to disconnect the display:
using MultiMonitorHelper.DisplayModels.Win7.Enum;
using MultiMonitorHelper.DisplayModels.Win7.Struct;

/// <summary>
/// Disconnect a display.
/// </summary>
public void DisconnectDisplay(int displayNumber)
{
    // Get the necessary display information
    int numPathArrayElements = -1;
    int numModeInfoArrayElements = -1;
    StatusCode error = CCDWrapper.GetDisplayConfigBufferSizes(
        QueryDisplayFlags.OnlyActivePaths,
        out numPathArrayElements,
        out numModeInfoArrayElements);
    DisplayConfigPathInfo[] pathInfoArray = new DisplayConfigPathInfo[numPathArrayElements];
    DisplayConfigModeInfo[] modeInfoArray = new DisplayConfigModeInfo[numModeInfoArrayElements];
    error = CCDWrapper.QueryDisplayConfig(
        QueryDisplayFlags.OnlyActivePaths,
        ref numPathArrayElements,
        pathInfoArray,
        ref numModeInfoArrayElements,
        modeInfoArray,
        IntPtr.Zero);
    if (error != StatusCode.Success)
    {
        // QueryDisplayConfig failed
    }

    // Check the index
    if (pathInfoArray[displayNumber].sourceInfo.modeInfoIdx < modeInfoArray.Length)
    {
        // Disable and reset the display configuration
        pathInfoArray[displayNumber].flags = DisplayConfigFlags.Zero;
        error = CCDWrapper.SetDisplayConfig(
            pathInfoArray.Length,
            pathInfoArray,
            modeInfoArray.Length,
            modeInfoArray,
            (SdcFlags.Apply | SdcFlags.AllowChanges | SdcFlags.UseSuppliedDisplayConfig));
        if (error != StatusCode.Success)
        {
            // SetDisplayConfig failed
        }
    }
}
3) Extend Win7Display to reconnect the display using an answer from this post:
using System.Diagnostics;

/// <summary>
/// Reconnect all displays.
/// </summary>
public void ReconnectDisplays()
{
    DisplayChanger.Start();
}

private static Process DisplayChanger = new Process
{
    StartInfo =
    {
        CreateNoWindow = true,
        WindowStyle = ProcessWindowStyle.Hidden,
        FileName = "DisplaySwitch.exe",
        Arguments = "/extend"
    }
};
4) Update the methods in IDisplay.
5) Implement the methods:
IDisplayModel displayModel = DisplayFactory.GetDisplayModel();
List<IDisplay> displayList = displayModel.GetActiveDisplays().ToList();
displayList[0].DisconnectDisplay(0);
displayList[0].ReconnectDisplays();
There's a GitHub project that I STILL haven't gotten around to finishing, but it's a starting point. You need to use Win7-specific APIs in order to change settings; ChangeDisplaySettings won't work.
Have a look: https://github.com/ChrisEelmaa/MultiMonitorHelper
This is what you need to do:
update the IDisplay interface to support a TurnOff() method,
and then call it:
var displayModel = DisplayFactory.GetDisplayModel();
var displayList = displayModel.GetActiveDisplays().ToList();
var firstDisplay = displayList[0].TurnOff();
How to implement TurnOff()? I WOULD imagine this is how (I might be wrong here):
You need to break the connection between the GPU and the monitor by breaking the "paths". You can break the path between a source and a target like this:
Call SetDisplayConfig() and pass in the specific paths, making sure you mask out the DISPLAYCONFIG_PATH_ACTIVE flag from the DISPLAYCONFIG_PATH_INFO structure's flags integer.
http://msdn.microsoft.com/en-us/library/windows/hardware/ff553945(v=vs.85).aspx
Sorry for not being more helpful, but this is pretty hardcore stuff; it took me quite a while to even understand the basics of that API. It's a starting point :-)
Take a look at this example of how to rotate a specific monitor in Win7: How do I set the monitor orientation in Windows 7?
In all honesty, just wrap DisplaySwitch.exe for Win7 and pass /internal or /external (depending on whether you want to disable/enable the first/second monitor); this might or might not work for >2 monitors.
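A minimal sketch of that wrapper approach (the same idea as the DisplayChanger process in the answer above):

// Switch to the internal display only; "/external", "/clone" and "/extend"
// are the other arguments DisplaySwitch.exe accepts on Win7.
var psi = new System.Diagnostics.ProcessStartInfo("DisplaySwitch.exe", "/internal")
{
    CreateNoWindow = true,
    WindowStyle = System.Diagnostics.ProcessWindowStyle.Hidden
};
System.Diagnostics.Process.Start(psi);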
I have been using the latest version of the WPFMediaKit. What I am trying to do is write a sample application that will use the SampleGrabber to capture the video frames of video files so I can have them as individual Bitmaps.
So far, I have had good luck with the following code when constructing and rendering my graph. However, when I use this code to play back a .wmv video file with the SampleGrabber attached, playback is jumpy or choppy. If I comment out the line where I add the SampleGrabber filter, it works fine. And it works correctly with the SampleGrabber for AVI/MPEG, etc.
protected virtual void OpenSource()
{
    FrameCount = 0;

    /* Make sure we clean up any remaining mess */
    FreeResources();

    if (m_sourceUri == null)
        return;

    string fileSource = m_sourceUri.OriginalString;
    if (string.IsNullOrEmpty(fileSource))
        return;

    try
    {
        /* Creates the GraphBuilder COM object */
        m_graph = new FilterGraphNoThread() as IGraphBuilder;
        if (m_graph == null)
            throw new Exception("Could not create a graph");

        /* Add our preferred audio renderer */
        InsertAudioRenderer(AudioRenderer);

        var filterGraph = m_graph as IFilterGraph2;
        if (filterGraph == null)
            throw new Exception("Could not QueryInterface for the IFilterGraph2");

        IBaseFilter renderer = CreateVideoMixingRenderer9(m_graph, 1);
        IBaseFilter sourceFilter;

        /* Have DirectShow find the correct source filter for the Uri */
        var hr = filterGraph.AddSourceFilter(fileSource, fileSource, out sourceFilter);
        DsError.ThrowExceptionForHR(hr);

        /* We will want to enum all the pins on the source filter */
        IEnumPins pinEnum;
        hr = sourceFilter.EnumPins(out pinEnum);
        DsError.ThrowExceptionForHR(hr);

        IntPtr fetched = IntPtr.Zero;
        IPin[] pins = { null };

        /* Counter for how many pins successfully rendered */
        int pinsRendered = 0;

        m_sampleGrabber = (ISampleGrabber)new SampleGrabber();
        SetupSampleGrabber(m_sampleGrabber);
        hr = m_graph.AddFilter(m_sampleGrabber as IBaseFilter, "SampleGrabber");
        DsError.ThrowExceptionForHR(hr);

        /* Loop over each pin of the source filter */
        while (pinEnum.Next(pins.Length, pins, fetched) == 0)
        {
            if (filterGraph.RenderEx(pins[0],
                AMRenderExFlags.RenderToExistingRenderers,
                IntPtr.Zero) >= 0)
                pinsRendered++;

            Marshal.ReleaseComObject(pins[0]);
        }

        Marshal.ReleaseComObject(pinEnum);
        Marshal.ReleaseComObject(sourceFilter);

        if (pinsRendered == 0)
            throw new Exception("Could not render any streams from the source Uri");

        /* Configure the graph in the base class */
        SetupFilterGraph(m_graph);
        HasVideo = true;

        /* Sets the NaturalVideoWidth/Height */
        //SetNativePixelSizes(renderer);
    }
    catch (Exception ex)
    {
        /* This exception will usually happen if the media does
         * not exist or could not be opened due to not having the
         * proper filters installed */
        FreeResources();

        /* Fire our failed event */
        InvokeMediaFailed(new MediaFailedEventArgs(ex.Message, ex));
    }

    InvokeMediaOpened();
}
And:
private void SetupSampleGrabber(ISampleGrabber sampleGrabber)
{
    FrameCount = 0;

    var mediaType = new AMMediaType
    {
        majorType = MediaType.Video,
        subType = MediaSubType.RGB24,
        formatType = FormatType.VideoInfo
    };

    int hr = sampleGrabber.SetMediaType(mediaType);
    DsUtils.FreeAMMediaType(mediaType);
    DsError.ThrowExceptionForHR(hr);

    hr = sampleGrabber.SetCallback(this, 0);
    DsError.ThrowExceptionForHR(hr);
}
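For reference, SetCallback(this, 0) means the grabber invokes ISampleGrabberCB.SampleCB (a value of 1 would select BufferCB). Here is a rough sketch of what the callback side can look like when turning samples into Bitmaps; the m_videoWidth/m_videoHeight fields are hypothetical and would come from the grabber's connected media type after the graph is built:

// Assumed: this class implements ISampleGrabberCB.
public int SampleCB(double sampleTime, IMediaSample pSample)
{
    IntPtr pBuffer;
    pSample.GetPointer(out pBuffer);

    // DirectShow RGB24 rows are padded to 4-byte boundaries and stored bottom-up.
    int stride = (m_videoWidth * 3 + 3) & ~3;
    using (var wrapper = new System.Drawing.Bitmap(m_videoWidth, m_videoHeight,
        stride, System.Drawing.Imaging.PixelFormat.Format24bppRgb, pBuffer))
    {
        var frame = new System.Drawing.Bitmap(wrapper); // deep copy, safe to keep
        frame.RotateFlip(System.Drawing.RotateFlipType.RotateNoneFlipY);
        // ... hand 'frame' off to whatever collects the Bitmaps ...
    }

    FrameCount++;
    Marshal.ReleaseComObject(pSample);
    return 0;
}

public int BufferCB(double sampleTime, IntPtr pBuffer, int bufferLen)
{
    return 0; // unused; SetCallback(this, 0) selects SampleCB
}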
I have read a few things saying that the .wmv or .asf formats are asynchronous or something. I have attempted inserting a WMAsfReader to decode, which works, but once it goes to the VMR9 it gives the same behavior. Also, I have gotten it to work correctly when I comment out the IBaseFilter renderer = CreateVideoMixingRenderer9(m_graph, 1); line and use filterGraph.Render(pins[0]); -- the only drawback is that it then renders in an ActiveMovie window of its own instead of my control; however, the SampleGrabber functions correctly and without any skipping. So I am thinking the bug is somewhere in the VMR9 / sample grabbing.
Any help? I am new to this.
Some decoders will use hardware acceleration via DXVA. This is implemented by negotiating a partly-decoded format and passing this partly-decoded data to the renderer to complete decoding and render. If you insert a sample grabber configured for RGB24 between the decoder and the renderer, you disable hardware acceleration.
That, I'm sure, is the crux of the problem. The details are still a little vague, I'm afraid, such as why it works with the default VMR-7 but fails with VMR-9. I would guess that the decoder is trying to use DXVA and failing in the VMR-9 case, but has a reasonable software-only fallback that works well with VMR-7.
I'm not familiar with the WPFMediaKit, but I would think the simplest solution is to replace the explicit VMR-9 creation with an explicit VMR-7 creation. That is, if the decoder works software-only with VMR-7, then use that and concentrate on fixing the window-reparenting issue.
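In DirectShowLib terms, that swap could look roughly like this (a sketch, assuming the surrounding OpenSource() code from the question):

/* Instead of: IBaseFilter renderer = CreateVideoMixingRenderer9(m_graph, 1); */
var vmr7 = (IBaseFilter)new VideoMixingRenderer(); // the VMR-7 COM class
int hr = m_graph.AddFilter(vmr7, "Video Mixing Renderer");
DsError.ThrowExceptionForHR(hr);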
It turned out that the code I posted (which was itself fairly shamelessly, slightly modified by me from Jeremiah Morrill's WPFMediaKit source code) was in fact adequate to render .WMV files and grab samples.
It seems the choppiness has something to do with running through the VS debugger, or with VS2008 itself. After messing around with the XAML for a while in the visual editor and then running the app, this choppy behavior gets introduced. Shutting down VS2008 seems to remedy it. :P
So not much of an answer, but at least an (annoying - restarting VS2008) fix when it crops up.