We have a rich client application developed in WPF/C# on .NET 4.0 which interops with in-house COM DLLs. Events containing video data are regularly raised via this COM interface.
As part of the application we render video via Windows Media Foundation, for which we have created interop wrappers. We have multiple WMF pipelines rendering different video at the same time.
The application runs for 6-8 hours rendering video. Private bytes remain consistently steady during this time (around 500-600 MB).
At some point the application appears to hang; at that point private bytes increase very rapidly until the process consumes approximately 1.4 GB of memory and crashes with an OutOfMemoryException.
We have reproduced this on 5 different workstations with different graphics cards (NVIDIA and ATI) and a mixture of Windows 7 32-bit and 64-bit.
We have analyzed 3 dump files and found that the finalizer thread is waiting on a call to ole32!GetToSTA. We are unable to determine what causes the finalizer thread to block or how to resolve it. I have pasted excerpts from the three dumps we've been analyzing:
Dump 1)
Thread 2:ae0 is waiting on an STA thread efc
Thread 28:efc is calling a WaitForSingleObject. The handle it is waiting on is actually a thread handle 5ab4 which is thread id 14a4
Thread 130:14a4 has the following stack:
37f4fdf4 753776a6 ntdll!NtRemoveIoCompletion+0x15
37f4fe20 63301743 KERNELBASE!GetQueuedCompletionStatus+0x29
37f4fe74 6330d0db WMNetMgr!CNSIoCompletionPortNT::WaitAndServeCompletionsLoop+0x5e
37f4fe94 633199bf WMNetMgr!CNSIoCompletionPortNT::WaitAndServeCompletions+0x4c
37f4fecc 63312dbd WMNetMgr!CWorkThreadManager::CWorkerThread::ThreadMain+0xa2
37f4fed8 769b3677 WMNetMgr!CWMThread::ThreadFunc+0x3b
37f4fee4 77679f42 kernel32!BaseThreadInitThunk+0xe
37f4ff24 77679f15 ntdll!__RtlUserThreadStart+0x70
37f4ff3c 00000000 ntdll!_RtlUserThreadStart+0x1b
Dump 2)
STA thread:
1127f474 75f80a91 ntdll!ZwWaitForSingleObject+0x15
1127f4e0 77411184 KERNELBASE!WaitForSingleObjectEx+0x98
1127f4f8 77411138 kernel32!WaitForSingleObjectExImplementation+0x75
1127f50c 63ae5f29 kernel32!WaitForSingleObject+0x12
1127f530 63a8eb2e WMNetMgr!CWMThread::Wait+0x78
1127f54c 63a8f128 WMNetMgr!CWorkThreadManager::CThreadPool::Shutdown+0x70
1127f568 63a76e10 WMNetMgr!CWorkThreadManager::Shutdown+0x34
1127f59c 63a76f2d WMNetMgr!CNSClientNetManagerHelper::Shutdown+0xdd
1127f5a4 63cd228e WMNetMgr!CNSClientNetManager::Shutdown+0x66
WARNING: Stack unwind information not available. Following frames may be wrong.
1127f5bc 63cd23a6 WMVCORE!WMCreateProfileManager+0xeef6
1127f5dc 63c573ca WMVCORE!WMCreateProfileManager+0xf00e
1127f5e8 63c62f18 WMVCORE!WMIsAvailableOffline+0x2ba3b
1127f618 63c19da6 WMVCORE!WMIsAvailableOffline+0x37589
1127f630 63c1aca2 WMVCORE!WMIsContentProtected+0x56e4
1127f63c 63c14bd7 WMVCORE!WMIsContentProtected+0x65e0
1127f650 113de6e8 WMVCORE!WMIsContentProtected+0x515
1127f660 113de513 wmp!CWMDRMReaderStub::CExternalStub::ShutdownInternalRefs+0x1d0
1127f674 113c1988 wmp!CWMDRMReaderStub::ExternalRelease+0x4f
1127f67c 1160a5b9 wmp!CWMDRMReaderStub::CExternalStub::Release+0x13
1127f6a4 1161745f wmp!CWMGraph::CleanupUpStream_selfprotected+0xbe
Finalizer thread is trying to switch to STA:
0126eccc 75f80a91 ntdll!ZwWaitForSingleObject+0x15
0126ed38 77411184 KERNELBASE!WaitForSingleObjectEx+0x98
0126ed50 77411138 kernel32!WaitForSingleObjectExImplementation+0x75
0126ed64 75d78907 kernel32!WaitForSingleObject+0x12
0126ed88 75e9a819 ole32!GetToSTA+0xad
Dump 3)
The finalizer thread is in the GetToSTA call, so it is waiting on the STA thread in order to free a COM object
Thread 29 is the STA thread hosting that COM object, and it is waiting on a critical section owned by thread 53 (1bf4)
Thread 53 is doing:
1cbcf990 76310a91 ntdll!ZwWaitForSingleObject+0x15
1cbcf9fc 74cb1184 KERNELBASE!WaitForSingleObjectEx+0x98
1cbcfa14 74cb1138 kernel32!WaitForSingleObjectExImplementation+0x75
1cbcfa28 65dfb6bb kernel32!WaitForSingleObject+0x12
WARNING: Stack unwind information not available. Following frames may be wrong.
1cbcfa48 74cb3677 wmp!Ordinal3000+0x53280
1cbcfa54 77029f42 kernel32!BaseThreadInitThunk+0xe
1cbcfa94 77029f15 ntdll!__RtlUserThreadStart+0x70
1cbcfaac 00000000 ntdll!_RtlUserThreadStart+0x1b
Any ideas on how we might resolve this issue?
Well, the finalizer thread is deadlocked. That will certainly result in an eventual OOM. We can't see the full stack trace for the finalizer thread, but odds are that you'll see SwitchAptAndDispatchCall() and ReleaseRCWListInCorrectCtx() in the trace, indicating that it is trying to call IUnknown::Release() to release a COM object. That object is apartment-threaded, so a thread switch is required to make the call safely.
I don't see any decent candidates in the stack traces you posted, possibly because you didn't get the right one or the thread is already busy shutting down due to the exception. Try to catch it earlier with a debugger break as soon as you see the virtual memory size climb.
The most common cause for a deadlock like this is violating the requirements for an STA thread, which state that it must never block and must pump a message loop. The never-block requirement is usually easy to meet in a .NET program; the CLR will pump a message loop when necessary when you use the lock statement or a WaitHandle.WaitXxx() call. It is, however, very common to forget to pump a message loop, especially since doing so is kinda painful: something like Application.Run() is required.
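To illustrate the shape of the fix (a generic sketch, not code from your application - the helper class and the shutdown call are made up): a thread that owns apartment-threaded COM objects should be created as STA and keep a dispatcher pumping, so the finalizer's cross-apartment Release() calls can get through.

using System.Threading;
using System.Windows.Threading;

static class StaComWorker
{
    // Sketch: a dedicated STA thread that pumps messages for as long as it is alive,
    // so cross-apartment calls (such as the finalizer releasing RCWs) can be dispatched.
    public static Thread Start()
    {
        var thread = new Thread(() =>
        {
            // ... create and use the apartment-threaded COM objects here ...

            // Pump messages until shutdown is requested; without a pump,
            // callers blocked in ole32!GetToSTA wait forever.
            Dispatcher.Run();
        });
        thread.SetApartmentState(ApartmentState.STA);
        thread.IsBackground = true;
        thread.Start();
        return thread;
    }
}

// When the thread is no longer needed:
//   Dispatcher.FromThread(thread).InvokeShutdown();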
I'm having a hard time with some code I have that apparently struggles when called from the second window created by the ShareTarget contract (when you share something to the app, it opens in a small standalone window).
This is my code so far:
// Blur and resize the image to get the average HSL color
// Assume that stream is an IRandomAccessStream pointing to valid image data
// and that decoder is a BitmapDecoder previously created from that same stream
HslColor hslMean;
using (RandomAccessStreamImageSource imageProvider = new RandomAccessStreamImageSource(stream))
using (BlurEffect blurEffect = new BlurEffect(imageProvider) { KernelSize = 256 })
{
    Color mean = await DispatcherHelper.GetFromUIThreadAsync(async () =>
    {
        WriteableBitmap
            blurred = new WriteableBitmap((int)decoder.PixelWidth, (int)decoder.PixelHeight),
            result = await blurEffect.GetBitmapAsync(blurred, OutputOption.Stretch),
            resized = result.Resize(1, 1, WriteableBitmapExtensions.Interpolation.Bilinear);
        return resized.GetPixel(0, 0);
    });
    hslMean = mean.ToHsl();
}
Note: the DispatcherHelper.GetFromUIThreadAsync method just checks whether it has access to the UI thread, and if needed it schedules the code on a CoreDispatcher object obtained from CoreApplication.MainView.CoreWindow.Dispatcher.
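For reference, a minimal sketch of what such a helper might look like (this is my guess at its shape, not the actual implementation; the cached field and the dispatch priority are assumptions):

using System;
using System.Threading.Tasks;
using Windows.ApplicationModel.Core;
using Windows.UI.Core;

public static class DispatcherHelper
{
    private static CoreDispatcher _dispatcher;

    public static async Task<T> GetFromUIThreadAsync<T>(Func<Task<T>> function)
    {
        // Cached on first use - this is the line that blows up when the helper
        // is first called from the standalone ShareTarget window
        if (_dispatcher == null)
            _dispatcher = CoreApplication.MainView.CoreWindow.Dispatcher;

        if (_dispatcher.HasThreadAccess)
            return await function();

        var tcs = new TaskCompletionSource<T>();
        await _dispatcher.RunAsync(CoreDispatcherPriority.Normal, async () =>
        {
            try { tcs.SetResult(await function()); }
            catch (Exception e) { tcs.SetException(e); }
        });
        return await tcs.Task;
    }
}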
Problem: this code works 100% fine if my app is already open, because at that point the CoreDispatcher object has already been created by previous calls to the DispatcherHelper class, so the method just uses the stored dispatcher to schedule the work. But if the app is closed when the ShareTarget window is opened (so that DispatcherHelper has to create the dispatcher for the first time), the CoreApplication.MainView.CoreWindow line throws a very weird exception:
COMException:
A COM call to an ASTA was blocked because the call chain originated in or passed through another ASTA. This call pattern is deadlock-prone and disallowed by apartment call control.
A COM call (IID: {638BB2DB-451D-4661-B099-414F34FFB9F1}, method index: 6) to an ASTA (thread 10276) was blocked because the call chain originated in or passed through another ASTA (thread 4112). This call pattern is deadlock-prone and disallowed by apartment call control.
So, I needed a way to make that method reliable even when being called from different windows. I've tried different options:
#1: Just invoking that code without dispatching to a different thread, as in theory I should be on the UI thread at this point ---> FAIL (The application called an interface that was marshalled for a different thread. (Exception from HRESULT: 0x8001010E (RPC_E_WRONG_THREAD)))
#2: Manually calling CoreApplication.MainView.CoreWindow.Dispatcher to dispatch that code block ---> FAIL (I get that weird COMException mentioned above)
#3: Manually using CoreApplication.MainView.Dispatcher to dispatch the code block (as it was the .CoreWindow part that spawned the exception) ---> FAIL (COMException: item not found)
#4: Using CoreApplication.GetCurrentView().CoreWindow.Dispatcher, CoreApplication.GetCurrentView().Dispatcher, Window.Current.CoreWindow.Dispatcher and Window.Current.Content.Dispatcher to schedule that code ---> FAIL (wrong thread again, I get the usual marshalling exception)
All these marshalling exceptions are thrown at the line result = await blurEffect.GetBitmapAsync(blurred, OutputOption.Stretch), so I suspect it might be something related to the Lumia Imaging SDK.
I mean, I'm quite sure that I am in fact on the UI thread, or otherwise I wouldn't have managed to create an instance of the WriteableBitmap class, right?
Why is it that I can create WriteableBitmap objects (and they need to be created on the UI thread as far as I know), but that GetBitmapAsync method from the Lumia SDK always throws that marshalling exception? I'm using it everywhere in my app without any problems, why is it that it just won't work from the ShareTarget window? Is there anything I need to do?
Thanks for your help!
Looks like this is a bug in the Lumia Imaging SDK (that was originally written for WP8.1, which didn't have multiple windows/dispatchers), so unless the call to the library is made from the dispatcher associated with the main app window (which of course can only be retrieved if the app is open in the background when the ShareTarget window pops up), it will just fail.
The only solution at this point is to replace that call to the Lumia SDK with some other code that doesn't rely on that particular library (in this case for example, it is possible to just get the ARGB array from the WriteableBitmap object and calculate the mean color manually).
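For example, a minimal sketch of that manual approach (assuming the default BGRA8 layout of WriteableBitmap.PixelBuffer; the method name is made up):

using System.Runtime.InteropServices.WindowsRuntime; // PixelBuffer.ToArray()
using Windows.UI;
using Windows.UI.Xaml.Media.Imaging;

public static Color GetMeanColor(WriteableBitmap bitmap)
{
    byte[] pixels = bitmap.PixelBuffer.ToArray(); // B, G, R, A for each pixel
    if (pixels.Length == 0) return Colors.Transparent;

    long b = 0, g = 0, r = 0, a = 0;
    int count = pixels.Length / 4;
    for (int i = 0; i < pixels.Length; i += 4)
    {
        b += pixels[i];
        g += pixels[i + 1];
        r += pixels[i + 2];
        a += pixels[i + 3];
    }
    return Color.FromArgb((byte)(a / count), (byte)(r / count),
                          (byte)(g / count), (byte)(b / count));
}

The resulting Color can then be converted with the same ToHsl() extension used in the question.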
I'm investigating a handle leak in a WCF service, running on .NET 4.6.2. The service runs fine but over time handle count keeps increasing, with thousands of Token type handles sitting around in the process. It seems memory is also leaking very slowly (likely related to the handle leak).
Edit: it looks like event and thread handles are also leaked.
Process Explorer shows that the suspect handles all have the same name:
DOMAIN\some.username$:183db90
and all share the same address.
I attached WinDbg to the process and ran !htrace -enable and then !htrace -diff some time later. This gave me a list of almost 2000 newly opened handles and native stack traces, like this one:
Handle = 0x000000000000b02c - OPEN
Thread ID = 0x000000000000484c, Process ID = 0x0000000000002cdc
0x00007ffc66e80b3a: ntdll!NtCreateEvent+0x000000000000000a
0x00007ffc64272ce8: KERNELBASE!CreateEventW+0x0000000000000084
0x00007ffc5b392e0a: clr!CLREventBase::CreateManualEvent+0x000000000000003a
0x00007ffc5b3935c7: clr!Thread::AllocHandles+0x000000000000007b
0x00007ffc5b3943c7: clr!Thread::CreateNewOSThread+0x000000000000007f
0x00007ffc5b394308: clr!Thread::CreateNewThread+0x0000000000000090
0x00007ffc5b394afb: clr!ThreadpoolMgr::CreateUnimpersonatedThread+0x00000000000000cb
0x00007ffc5b394baf: clr!ThreadpoolMgr::MaybeAddWorkingWorker+0x000000000000010c
0x00007ffc5b1d8c74: clr!ManagedPerAppDomainTPCount::SetAppDomainRequestsActive+0x0000000000000024
0x00007ffc5b1d8d27: clr!ThreadpoolMgr::SetAppDomainRequestsActive+0x000000000000003f
0x00007ffc5b1d8cae: clr!ThreadPoolNative::RequestWorkerThread+0x000000000000002f
0x00007ffc5a019028: mscorlib_ni+0x0000000000549028
0x00007ffc59f5f48f: mscorlib_ni+0x000000000048f48f
0x00007ffc59f5f3b9: mscorlib_ni+0x000000000048f3b9
Another stack trace (a large portion of the ~2000 new handles have this):
Handle = 0x000000000000a0c8 - OPEN
Thread ID = 0x0000000000003614, Process ID = 0x0000000000002cdc
0x00007ffc66e817aa: ntdll!NtOpenProcessToken+0x000000000000000a
0x00007ffc64272eba: KERNELBASE!OpenProcessToken+0x000000000000000a
0x00007ffc5a01aa9b: mscorlib_ni+0x000000000054aa9b
0x00007ffc5a002ebd: mscorlib_ni+0x0000000000532ebd
0x00007ffc5a002e68: mscorlib_ni+0x0000000000532e68
0x00007ffc5a002d40: mscorlib_ni+0x0000000000532d40
0x00007ffc5a0027c7: mscorlib_ni+0x00000000005327c7
0x00007ffbfbfb3d6a: +0x00007ffbfbfb3d6a
When I run the !handle 0 0 command in WinDbg, I get the following result:
21046 Handles
Type Count
None 4
Event 2635 **
Section 360
File 408
Directory 4
Mutant 9
Semaphore 121
Key 77
Token 16803 **
Thread 554 **
IoCompletion 8
Timer 3
TpWorkerFactory 2
ALPC Port 7
WaitCompletionPacket 51
The ones marked with ** are increasing over time, although at a different rate.
Edit 3:
I ran !dumpheap to see the number of Thread objects (unrelated classes removed):
!DumpHeap -stat -type System.Threading.Thread
Statistics:
MT Count TotalSize Class Name
00007ffc5a152bb0 745 71520 System.Threading.Thread
Active thread count fluctuates between 56 and 62 in Process Explorer as the process is handling some background tasks periodically.
Some of the stack traces are different, but they're all native traces, so I don't know what managed code triggered the handle creation. Is it possible to get the managed function call that will run on the newly created thread when Thread::CreateNewThread is called? I don't know what WinDbg command I'd use for this.
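The closest thing I could think of (untested) was to break on the CLR thread-creation routine that shows up in the trace above and dump the managed stack of the creating thread each time, roughly:

.loadby sos clr
bp clr!Thread::CreateNewThread "!clrstack; g"
g

but I don't know whether that is the right approach.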
Note:
I cannot attach sample code to the question because the WCF service loads hundreds of DLLs, most of which are built from many source files - I have some very vague suspicions about what major area this may come from, but I don't know enough details to produce an MCVE.
Edit: after Harry Johnston's comments below, I noticed that thread handle count is also increasing - I overlooked this earlier because of the high number of token handles.
Here is one simplified example showing how hard it can be to debug a deadlock in awaited tasks in some cases:
class Program
{
    static void Main(string[] args)
    {
        var task = Hang();
        task.Wait();
    }

    static async Task Hang()
    {
        var tcs = new TaskCompletionSource<object>();
        // do some more stuff. e.g. another await Task.FromResult(0);
        await tcs.Task;   // never completes: nothing has set the result yet
        tcs.SetResult(0); // never reached
    }
}
This example makes it easy to see why it deadlocks: it is awaiting a task which is only completed later in the same method. It looks stupid, but a similar scenario could happen in more complicated production code, and deadlocks could be mistakenly introduced through a lack of multithreading experience.
The interesting thing about this example is that inside the Hang method there is no thread-blocking code like Task.Wait() or Task.Result. When I attach the VS debugger, it just shows the main thread waiting for the task to finish; however, no thread in the Parallel Stacks view shows where the code has stopped inside the Hang method.
Here are the call stacks for each thread (3 in all) that I have in the Parallel Stacks view:
Thread 1:
[Managed to Native Transition]
Microsoft.VisualStudio.HostingProcess.HostProc.WaitForThreadExit
Microsoft.VisualStudio.HostingProcess.HostProc.RunParkingWindowThread
System.Threading.ThreadHelper.ThreadStart_Context
System.Threading.ExecutionContext.RunInternal
System.Threading.ExecutionContext.Run
System.Threading.ExecutionContext.Run
System.Threading.ThreadHelper.ThreadStart
Thread 2:
[Managed to Native Transition]
Microsoft.Win32.SystemEvents.WindowThreadProc
System.Threading.ThreadHelper.ThreadStart_Context
System.Threading.ExecutionContext.RunInternal
System.Threading.ExecutionContext.Run
System.Threading.ExecutionContext.Run
System.Threading.ThreadHelper.ThreadStart
Main Thread:
System.Threading.Monitor.Wait
System.Threading.Monitor.Wait
System.Threading.ManualResetEventSlim.Wait
System.Threading.Tasks.Task.SpinThenBlockingWait
System.Threading.Tasks.Task.InternalWait
System.Threading.Tasks.Task.Wait
System.Threading.Tasks.Task.Wait
ConsoleApplication.Program.Main Line 12 //this is our Main function
[Native to Managed Transition]
[Managed to Native Transition]
System.AppDomain.ExecuteAssembly
Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly
System.Threading.ThreadHelper.ThreadStart_Context
System.Threading.ExecutionContext.RunInternal
System.Threading.ExecutionContext.Run
System.Threading.ExecutionContext.Run
System.Threading.ThreadHelper.ThreadStart
Is there any way to find out where the task has stopped inside the Hang method, and ideally the call stack? I believe internally there must be some state for each task and its continuation points so that the scheduler can work, but I don't know how to inspect that.
Inside Visual Studio, I am not aware of a simple way to debug this sort of situation. However, there are two other ways to visualize this for full framework applications, plus a bonus preview of a way to do this in .NET Core 3.
tl;dr version: Yep, it's hard, and yep, the information you want is there, just difficult to find. Once you find the heap objects using the methods below, you can take their addresses to the VS watch window and use the visualizers to take a deeper dive.
WinDbg
WinDbg has a primitive but helpful extension that provides a !dumpasync command.
If you download the extension from the vs-threading release branch and copy the x64 and x86 AsyncDebugTools.dll to C:\Program Files (x86)\Windows Kits\10\Debuggers\[x86|x64]\winext folders, you can do the following:
.load AsyncDebugTools
!dumpasync
The output (taken from the link above) looks like:
07494c7c <0> Microsoft.Cascade.Rpc.RpcSession+<SendRequestAsync>d__49
.07491d10 <1> Microsoft.Cascade.Agent.WorkspaceService+<JoinRemoteWorkspaceAsync>d__28
..073c8be4 <5> Microsoft.Cascade.Agent.WorkspaceService+<JoinWorkspaceAsync>d__22
...073b7e94 <0> Microsoft.Cascade.Rpc.RpcDispatcher`1+<>c__DisplayClass23_2+<<BuildMethodMap>b__2>d[[Microsoft.Cascade.Contracts.IWorkspaceService, Microsoft.Cascade.Common]]
....073b60e0 <0> Microsoft.Cascade.Rpc.RpcServiceUtil+<RequestAsync>d__3
.....073b366c <0> Microsoft.Cascade.Rpc.RpcSession+<ReceiveRequestAsync>d__42
......073b815c <0> Microsoft.Cascade.Rpc.RpcSession+<>c__DisplayClass40_1+<<Receive>b__0>d
On your sample above the output is less interesting:
033a23c8 <0> StackOverflow41476418.Program+<Hang>d__1
The description of the output is:
The output above is a set of stacks – not exactly callstacks, but actually "continuation stacks". A continuation stack is synthesized based on what code has 'awaited' the call to an async method. It's possible that the Task returned by an async method was awaited from multiple places (e.g. the Task was stored in a field, then awaited by multiple interested parties). When there are multiple awaiters, the stack can branch and show multiple descendents of a given frame. The stacks above are therefore actually "trees", and the leading dots at each frame helps recognize when trees have multiple branches.
If an async method is invoked but not awaited on, the caller won't appear in the continuation stack.
Once you see the nested hierarchy for more complex situations, you can at least deep-dive into the state objects and find their continuations and roots.
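From there it is ordinary SOS spelunking. For example, taking the address from the sample output above (and assuming SOS is loaded for the target CLR), dumping the state machine object shows its <>1__state, <>t__builder and hoisted-local fields, and following the builder's task leads to whatever continuation is - or, as in the Hang case, isn't - attached:

!DumpObj 033a23c8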
LinqPad and ClrMd
Another useful tool is LINQPad coupled with ClrMd and ClrMD.Extensions. The latter package bridges ClrMd into LINQPad - there is a getting started guide. Once you have the packages/namespaces set up, this query is what you want:
var session = ClrMD.Extensions.ClrMDSession.LoadCrashDump(@"dmpfile.dmp");
var stateMachineTypes = (
    from type in session.Heap.EnumerateTypes()
    where type.Interfaces.Any(item => item.Name == "System.Runtime.CompilerServices.IAsyncStateMachine")
    select type);
session.Heap.EnumerateDynamicObjects(stateMachineTypes).Dump(2);
Below is a sample of the output running on your sample code:
.NET Core 3
For .NET Core 3.x, !dumpasync was added to the WinDbg SOS extension. It is MUCH better than the extension described above, as it gives much more context; you can see it is part of a much larger user story to improve debugging of async code. Here is the output from it under .NET Core 3.0 preview 6 with a preview 7 version of SOS with extended options. Note that line numbers are present, something you don't get with the options above:
0:000> !dumpasync -stacks -roots
Statistics:
MT Count TotalSize Class Name
00007ffb564e9be0 1 96 System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1+AsyncStateMachineBox`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib],[StackOverflow41476418_Core.Program+<Hang>d__1, StackOverflow41476418_Core]]
Total 1 objects
In 1 chains.
Address MT Size State Description
00000209915d21a8 00007ffb564e9be0 96 0 StackOverflow41476418_Core.Program+<Hang>d__1
Async "stack":
.00000209915d2738 System.Threading.Tasks.Task+SetOnInvokeMres
GC roots:
Thread bc20:
000000e08057e8c0 00007ffbb580a292 System.Threading.Tasks.Task.SpinThenBlockingWait(Int32, System.Threading.CancellationToken) [/_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs # 2939]
rbp+10: 000000e08057e930
-> 00000209915d21a8 System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1+AsyncStateMachineBox`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib],[StackOverflow41476418_Core.Program+<Hang>d__1, StackOverflow41476418_Core]]
000000e08057e930 00007ffbb580a093 System.Threading.Tasks.Task.InternalWaitCore(Int32, System.Threading.CancellationToken) [/_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs # 2878]
rsi:
-> 00000209915d21a8 System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1+AsyncStateMachineBox`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib],[StackOverflow41476418_Core.Program+<Hang>d__1, StackOverflow41476418_Core]]
000000e08057e9b0 00007ffbb5809f0a System.Threading.Tasks.Task.Wait(Int32, System.Threading.CancellationToken) [/_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs # 2789]
rsi:
-> 00000209915d21a8 System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1+AsyncStateMachineBox`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib],[StackOverflow41476418_Core.Program+<Hang>d__1, StackOverflow41476418_Core]]
Windows symbol path parsing FAILED
000000e08057ea10 00007ffb56421f17 StackOverflow41476418_Core.Program.Main(System.String[]) [C:\StackOverflow41476418_Core\Program.cs # 12]
rbp+28: 000000e08057ea38
-> 00000209915d21a8 System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1+AsyncStateMachineBox`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib],[StackOverflow41476418_Core.Program+<Hang>d__1, StackOverflow41476418_Core]]
000000e08057ea10 00007ffb56421f17 StackOverflow41476418_Core.Program.Main(System.String[]) [C:\StackOverflow41476418_Core\Program.cs # 12]
rbp+30: 000000e08057ea40
-> 00000209915d21a8 System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1+AsyncStateMachineBox`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib],[StackOverflow41476418_Core.Program+<Hang>d__1, StackOverflow41476418_Core]]
I have a webcam control using DirectShow.NET. I've created a custom control to display video and capture an image from a webcam. I'm using that custom control in another WPF window. I have a function public Bitmap CaptureImage() in the custom control to abstract away a little bit of the DirectShow programming and simply return a Bitmap. Since the images are relatively large (1920x1080), the GetCurrentImage() function of the IVMRWindowlessControl9 takes quite a while to process (2-3 seconds). I've stepped through my code and can confirm that this call is the only one that takes a long time to process.
Because of this, the GUI thread in my main WPF window hangs, causing it to be unresponsive for a few seconds, so if I want to display a progress spinner while the image is being captured, it will just remain frozen.
Here is the code for CaptureImage():
public Bitmap CaptureImage()
{
    if (!IsCapturing)
        return null;

    this.mediaControl.Stop();

    IntPtr currentImage = IntPtr.Zero;
    Bitmap bmp = null;
    try
    {
        int hr = this.windowlessControl.GetCurrentImage(out currentImage);
        DsError.ThrowExceptionForHR(hr);

        if (currentImage != IntPtr.Zero)
        {
            BitmapInfoHeader bih = new BitmapInfoHeader();
            Marshal.PtrToStructure(currentImage, bih);

            ...
            // Irrelevant code removed
            ...

            bmp = new Bitmap(bih.Width, bih.Height, stride, pixelFormat, new IntPtr(currentImage.ToInt64() + Marshal.SizeOf(bih)));
            bmp.RotateFlip(RotateFlipType.RotateNoneFlipY);
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show("Failed to capture image:" + ex.Message);
    }
    finally
    {
        Marshal.FreeCoTaskMem(currentImage);
    }

    return bmp;
}
In order to fix this, I've tried to run this as a background task as follows:
public async void CaptureImageAsync()
{
    try
    {
        await Task.Run(() =>
        {
            CaptureImage();
        });
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
I've tried multiple ways to do this including using BackgroundWorkers, but it seems that any time I make this call asynchronously, it creates this error:
Unable to cast COM object of type 'DirectShowLib.VideoMixingRenderer9'
to interface type 'DirectShowLib.IVMRWindowlessControl9'. This
operation failed because the QueryInterface call on the COM component
for the interface with IID '{8F537D09-F85E-4414-B23B-502E54C79927}'
failed due to the following error: No such interface supported
(Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).
The error always happens when on this line:
int hr = this.windowlessControl.GetCurrentImage(out currentImage);
Calling CaptureImage() synchronously yields normal results. The image is captured and everything works as intended. However, switching to using any kind of asynchronous functionality gives that error.
You are having two issues here. First of all, the original slowness of the API is by design. MSDN mentions this:
However, frequent calls to this method will degrade video playback performance.
Video memory can be pretty slow for read-backs - this is what causes the 2-3 second processing time, not the resolution of the image per se. The bad news is that even polling snapshots from a background thread is highly likely to affect the visual stream.
This method was and is intended for taking sporadic snapshots, especially ones initiated interactively by the user, not automated captures. Applications that need more intensive, automated snapshots which do not affect the visual feed are supposed to intercept the feed before it is sent to video memory (there are several options for this; the most popular, if clumsy, is the Sample Grabber filter).
Second, you are likely hitting the .NET threading issue described in this question, which triggers the mentioned exception. In native code development it is easy to use the same interface pointer from several threads by sneakily violating COM threading rules and passing the pointer between apartments. Since the CLR adds a middle layer between your code and the COM object that performs additional safety checks, you can no longer use the COM object/interfaces from a background thread, because the COM threading rules are enforced.
I suppose you either have to keep suffering the long freezes that come with direct API calls, or add some native code that bypasses them (for example, a helper filter that captures frames before they are sent to video memory and at the same time exposes helper functionality to the .NET caller so that background-thread calls are supported). Presumably you could also do all the DirectShow-related work on a pool of helper MTA threads, which would solve the background-thread caller problem, but then you would most likely need to move this code off your UI thread, which is STA - I don't think that is something people do often, if at all.
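As a rough illustration of that last idea only (the CaptureWorker class and the way it wraps your existing CaptureImage are made up for this sketch): create and use every DirectShow interface on one dedicated thread, and post work to it rather than touching the interfaces from the UI or thread-pool threads.

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Sketch: a single dedicated thread owns every DirectShow interface, so the graph
// is built and queried in one apartment and no interface pointer crosses threads.
sealed class CaptureWorker : IDisposable
{
    private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();
    private readonly Thread _thread;

    public CaptureWorker()
    {
        _thread = new Thread(() =>
        {
            foreach (var work in _queue.GetConsumingEnumerable())
                work();
        });
        _thread.SetApartmentState(ApartmentState.MTA); // the COM objects live on this thread
        _thread.IsBackground = true;
        _thread.Start();
    }

    public Task<T> RunAsync<T>(Func<T> func)
    {
        var tcs = new TaskCompletionSource<T>();
        _queue.Add(() =>
        {
            try { tcs.SetResult(func()); }
            catch (Exception ex) { tcs.SetException(ex); }
        });
        return tcs.Task;
    }

    public void Dispose()
    {
        _queue.CompleteAdding();
    }
}

// Usage: build the graph and capture on the worker, e.g.
//   Bitmap bmp = await worker.RunAsync(() => CaptureImage());
// The important part is that the graph is also built on that thread; posting calls
// to interfaces created on the UI thread would still cross apartments.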
I've already checked some answers, but I'm still not convinced what the right approach is for acquiring and releasing COM objects in parallel.
In particular, I use Parallel.ForEach to increase performance, and inside it I make calls to MS Outlook (2010, Exchange Server). However, when releasing the COM objects I occasionally get COMExceptions.
What is the right approach to working with COM objects with the Parallel library?
System.Threading.Tasks.Parallel.ForEach(myList, myItem =>
{
    String freeBusySlots = "";
    Outlook.Recipient myReceipient = null;
    try
    {
        myReceipient = namespaceMAPI.CreateRecipient(myItem.ToString());
    }
    catch (Exception ex)
    {
        ...
    }
    finally
    {
        if (myReceipient == null)
        {
            ...
        }
        Marshal.ReleaseComObject(myReceipient); // -> I get an exception here sometimes ... how to avoid this
        myReceipient = null;
    }
}); // Parallel.ForEach
Outlook Object Model cannot be used from secondary threads. Sometimes it works, but it tends to bomb out at the most inappropriate moment.
As of Outlook 2013, Outlook will immediately raise an error if an OOM object is accessed from a secondary thread.
If your code is running from another application, keep in mind that all calls will be serialized to the main Outlook thread anyway so there is really no point using multiple threads.
Also note that Extended MAPI (C++ or Delphi only) or Redemption (which wraps Extended MAPI and can be accessed from any language - I am its author) can be used from multiple threads, but your mileage will vary depending on the particular MAPI provider (IMAP4 store provider is the worst).
As a general rule you should never be using ReleaseComObject. Instead just wait for the RCW object to be collected and let the GC do the work for you. Using this API correctly is extremely hard.
Additionally, it's very likely that all of Outlook's COM objects live in the STA. If that is the case, there is nothing to be gained by operating on them in parallel: every call made from a background thread will simply be marshalled back to the foreground thread for processing, so the background thread adds no value, just confusion.
I'm not 100% certain these are STA objects but I'd be very surprised if they weren't.
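Putting both points together, a plain sequential loop with no Marshal.ReleaseComObject is the safer shape. A sketch (the Resolve/FreeBusy calls and their arguments are illustrative; query whatever you actually need per recipient):

foreach (var myItem in myList)
{
    try
    {
        Outlook.Recipient recipient = namespaceMAPI.CreateRecipient(myItem.ToString());
        if (recipient.Resolve())
        {
            // free/busy in 30-minute slots starting today
            string freeBusySlots = recipient.FreeBusy(DateTime.Today, 30);
            // ... use freeBusySlots ...
        }
    }
    catch (Exception ex)
    {
        // log and move on; no ReleaseComObject - let the GC collect the RCW
    }
}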