Here is a simplified example of a deadlock in awaited tasks that I found really hard to debug in some cases:
class Program
{
    static void Main(string[] args)
    {
        var task = Hang();
        task.Wait();
    }

    static async Task Hang()
    {
        var tcs = new TaskCompletionSource<object>();
        // do some more stuff, e.g. another await Task.FromResult(0);
        await tcs.Task;
        tcs.SetResult(0);
    }
}
It is easy to see why this example deadlocks: it awaits a task that is only completed later in the same method, so the completion line is never reached. It looks contrived, but a similar scenario can occur in more complicated production code, where deadlocks can be introduced by mistake due to a lack of multithreading experience.
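For contrast, here is a minimal sketch of a version that does not hang, completing the TaskCompletionSource from a separate task before awaiting it (the method name and the Task.Run call are illustrative, not part of the original code):

static async Task NoHang()
{
    var tcs = new TaskCompletionSource<object>();

    // Complete the TCS from somewhere else (here a fire-and-forget task),
    // so the await below can actually finish.
    Task.Run(() => tcs.SetResult(0));

    await tcs.Task;
}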
The interesting thing about this example is that there is no thread-blocking code like Task.Wait() or Task.Result inside the Hang method. When I attach the VS debugger, it just shows the main thread waiting for the task to finish. However, no thread in the Parallel Stacks view shows where the code has stopped inside the Hang method.
Here are the call stacks for each of the three threads I have in the Parallel Stacks view:
Thread 1:
[Managed to Native Transition]
Microsoft.VisualStudio.HostingProcess.HostProc.WaitForThreadExit
Microsoft.VisualStudio.HostingProcess.HostProc.RunParkingWindowThread
System.Threading.ThreadHelper.ThreadStart_Context
System.Threading.ExecutionContext.RunInternal
System.Threading.ExecutionContext.Run
System.Threading.ExecutionContext.Run
System.Threading.ThreadHelper.ThreadStart
Thread 2:
[Managed to Native Transition]
Microsoft.Win32.SystemEvents.WindowThreadProc
System.Threading.ThreadHelper.ThreadStart_Context
System.Threading.ExecutionContext.RunInternal
System.Threading.ExecutionContext.Run
System.Threading.ExecutionContext.Run
System.Threading.ThreadHelper.ThreadStart
Main Thread:
System.Threading.Monitor.Wait
System.Threading.Monitor.Wait
System.Threading.ManualResetEventSlim.Wait
System.Threading.Tasks.Task.SpinThenBlockingWait
System.Threading.Tasks.Task.InternalWait
System.Threading.Tasks.Task.Wait
System.Threading.Tasks.Task.Wait
ConsoleApplication.Program.Main Line 12 //this is our Main function
[Native to Managed Transition]
[Managed to Native Transition]
System.AppDomain.ExecuteAssembly
Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly
System.Threading.ThreadHelper.ThreadStart_Context
System.Threading.ExecutionContext.RunInternal
System.Threading.ExecutionContext.Run
System.Threading.ExecutionContext.Run
System.Threading.ThreadHelper.ThreadStart
Is there any way to find out where the task has stopped inside the Hang method, and ideally get its call stack? I believe the runtime must internally keep some state about each task and its continuation points so that the scheduler can work, but I don't know how to inspect that.
Inside Visual Studio, I am not aware of a simple way to debug this sort of situation. However, there are two other ways to visualize this for full-framework applications, plus a bonus preview of a way to do this in .NET Core 3.
tl;dr version: yep, it's hard, and yep, the information you want is there, it is just difficult to find. Once you locate the heap objects using the methods below, you can paste their addresses into the VS watch window and use the visualizers to take a deeper dive.
WinDbg
WinDbg has a primitive but helpful extension that provides a !dumpasync command.
If you download the extension from the vs-threading release branch and copy the x64 and x86 AsyncDebugTools.dll to C:\Program Files (x86)\Windows Kits\10\Debuggers\[x86|x64]\winext folders, you can do the following:
.load AsyncDebugTools
!dumpasync
The output (taken from the link above) looks like:
07494c7c <0> Microsoft.Cascade.Rpc.RpcSession+<SendRequestAsync>d__49
.07491d10 <1> Microsoft.Cascade.Agent.WorkspaceService+<JoinRemoteWorkspaceAsync>d__28
..073c8be4 <5> Microsoft.Cascade.Agent.WorkspaceService+<JoinWorkspaceAsync>d__22
...073b7e94 <0> Microsoft.Cascade.Rpc.RpcDispatcher`1+<>c__DisplayClass23_2+<<BuildMethodMap>b__2>d[[Microsoft.Cascade.Contracts.IWorkspaceService, Microsoft.Cascade.Common]]
....073b60e0 <0> Microsoft.Cascade.Rpc.RpcServiceUtil+<RequestAsync>d__3
.....073b366c <0> Microsoft.Cascade.Rpc.RpcSession+<ReceiveRequestAsync>d__42
......073b815c <0> Microsoft.Cascade.Rpc.RpcSession+<>c__DisplayClass40_1+<<Receive>b__0>d
For your sample above, the output is less interesting:
033a23c8 <0> StackOverflow41476418.Program+<Hang>d__1
The description of the output is:
The output above is a set of stacks – not exactly callstacks, but actually "continuation stacks". A continuation stack is synthesized based on what code has 'awaited' the call to an async method. It's possible that the Task returned by an async method was awaited from multiple places (e.g. the Task was stored in a field, then awaited by multiple interested parties). When there are multiple awaiters, the stack can branch and show multiple descendents of a given frame. The stacks above are therefore actually "trees", and the leading dots at each frame helps recognize when trees have multiple branches.
If an async method is invoked but not awaited on, the caller won't appear in the continuation stack.
Once you see the nested hierarchy for more complex situations, you can at least deep-dive into the state objects and find their continuations and roots.
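Once you have an object address from !dumpasync (for example the 033a23c8 address above), one way to take that deeper dive inside WinDbg itself is the standard SOS DumpObj command. A minimal sketch, assuming SOS loads cleanly for your runtime:

.loadby sos clr
!DumpObj 033a23c8

This dumps the fields of the boxed state machine, including its <>1__state field and the builder/awaiter fields that hold the continuation.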
LinqPad and ClrMd
Another useful tool is LINQPad coupled with ClrMd and ClrMD.Extensions. The latter package is used to bridge ClrMd into LINQPad - there is a getting started guide. Once you have the packages/namespaces set up, this query is what you want:
var session = ClrMD.Extensions.ClrMDSession.LoadCrashDump(@"dmpfile.dmp");

var stateMachineTypes = (
    from type in session.Heap.EnumerateTypes()
    where type.Interfaces.Any(item => item.Name == "System.Runtime.CompilerServices.IAsyncStateMachine")
    select type);

session.Heap.EnumerateDynamicObjects(stateMachineTypes).Dump(2);
Running this query against a dump of your sample code lists the Program+<Hang>d__1 state machine object on the heap.
.NET Core 3
For .NET Core 3.x, !dumpasync was added to the WinDbg SOS extension. It is MUCH better than the extension described above, as it gives much more context. You can see it is part of a much larger user story to improve debugging of async code. Here is the output under .NET Core 3.0 preview 6 with a preview 7 version of SOS, using the extended options. Note that line numbers are present, something you don't get with the options above:
0:000> !dumpasync -stacks -roots
Statistics:
MT Count TotalSize Class Name
00007ffb564e9be0 1 96 System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1+AsyncStateMachineBox`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib],[StackOverflow41476418_Core.Program+<Hang>d__1, StackOverflow41476418_Core]]
Total 1 objects
In 1 chains.
Address MT Size State Description
00000209915d21a8 00007ffb564e9be0 96 0 StackOverflow41476418_Core.Program+<Hang>d__1
Async "stack":
.00000209915d2738 System.Threading.Tasks.Task+SetOnInvokeMres
GC roots:
Thread bc20:
000000e08057e8c0 00007ffbb580a292 System.Threading.Tasks.Task.SpinThenBlockingWait(Int32, System.Threading.CancellationToken) [/_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs # 2939]
rbp+10: 000000e08057e930
-> 00000209915d21a8 System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1+AsyncStateMachineBox`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib],[StackOverflow41476418_Core.Program+<Hang>d__1, StackOverflow41476418_Core]]
000000e08057e930 00007ffbb580a093 System.Threading.Tasks.Task.InternalWaitCore(Int32, System.Threading.CancellationToken) [/_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs # 2878]
rsi:
-> 00000209915d21a8 System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1+AsyncStateMachineBox`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib],[StackOverflow41476418_Core.Program+<Hang>d__1, StackOverflow41476418_Core]]
000000e08057e9b0 00007ffbb5809f0a System.Threading.Tasks.Task.Wait(Int32, System.Threading.CancellationToken) [/_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs # 2789]
rsi:
-> 00000209915d21a8 System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1+AsyncStateMachineBox`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib],[StackOverflow41476418_Core.Program+<Hang>d__1, StackOverflow41476418_Core]]
Windows symbol path parsing FAILED
000000e08057ea10 00007ffb56421f17 StackOverflow41476418_Core.Program.Main(System.String[]) [C:\StackOverflow41476418_Core\Program.cs # 12]
rbp+28: 000000e08057ea38
-> 00000209915d21a8 System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1+AsyncStateMachineBox`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib],[StackOverflow41476418_Core.Program+<Hang>d__1, StackOverflow41476418_Core]]
000000e08057ea10 00007ffb56421f17 StackOverflow41476418_Core.Program.Main(System.String[]) [C:\StackOverflow41476418_Core\Program.cs # 12]
rbp+30: 000000e08057ea40
-> 00000209915d21a8 System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1+AsyncStateMachineBox`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib],[StackOverflow41476418_Core.Program+<Hang>d__1, StackOverflow41476418_Core]]
Related
With .NET 7's NativeAOT compilation, we can now load a C# DLL as a regular Win32 module.
HMODULE module = LoadLibraryW(L"AOT.dll");
auto hello = reinterpret_cast<void (*)()>(GetProcAddress(module, "Hello"));
hello();
This works fine and prints some stuff in console.
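For reference, the Hello export on the C# side would be declared roughly like this when publishing with NativeAOT (a sketch; the actual class name and method body in the original project are not shown):

using System;
using System.Runtime.InteropServices;

public static class Exports
{
    // Exposed as a native export named "Hello" in the NativeAOT-compiled DLL.
    [UnmanagedCallersOnly(EntryPoint = "Hello")]
    public static void Hello()
    {
        Console.WriteLine("Hello from NativeAOT");
    }
}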
However, unloading the DLL simply doesn't work. No matter how many times I call FreeLibrary on the module handle, GetModuleHandle("AOT.dll") still returns the handle to the module, implying that it did not unload successfully.
My wild guess was that the runtime has some background threads still running (GC?), so I enumerated all threads, used NtQueryInformationThread to retrieve the start address of each thread, then called GetModuleHandleEx with GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS to get the module where the thread started. The results were as follows.
Before:
THREAD ID = 7052
base priority = 8
delta priority = 0
Start address: 00007FF69D751613
Module: 00007FF69D740000 => CppRun.exe
THREAD ID = 3248
base priority = 8
delta priority = 0
Start address: 00007FFEF1F42B20
Module: 00007FFEF1EF0000 => ntdll.dll
THREAD ID = 7160
base priority = 8
delta priority = 0
Start address: 00007FFEF1F42B20
Module: 00007FFEF1EF0000 => ntdll.dll
After:
THREAD ID = 7052
base priority = 8
delta priority = 0
Start address: 00007FF69D751613
Module: 00007FF69D740000 => CppRun.exe
THREAD ID = 3248
base priority = 8
delta priority = 0
Start address: 00007FFEF1F42B20
Module: 00007FFEF1EF0000 => ntdll.dll
THREAD ID = 7160
base priority = 8
delta priority = 0
Start address: 00007FFEF1F42B20
Module: 00007FFEF1EF0000 => ntdll.dll
THREAD ID = 5944
base priority = 8
delta priority = 0
Start address: 00007FFEF1F42B20
Module: 00007FFEF1EF0000 => ntdll.dll
THREAD ID = 17444
base priority = 10
delta priority = 0
Start address: 00007FFE206DBEF0
Module: 00007FFE206D0000 => AOT.dll
"CppRun.exe" is my testing application.
As you can see, two additional threads were spawned. One from ntdll (5944), and one from my AOT compiled dll (17444).
I don't know what the leftover thread in "AOT.dll" was for (maybe GC?), but I force-terminated it successfully (definitely unhealthy, I know).
However, when I tried to open the thread in ntdll (5944), it threw an exception:
An invalid thread, handle %p, is specified for this operation. Possibly, a threadpool worker thread was specified
Given that, I assume .NET starts a threadpool worker during initialization? How can I stop that pool and unload the DLL?
Or is there a better way to unload a NativeAOT-compiled DLL?
Update: I've hooked the CreateThreadPool function, but the runtime doesn't call it. I'm still trying to figure out what spawned that thread.
Edit:
NativeAOT (aka CoreRT) compiled DLLs were unloadable at first, but Microsoft later blocked the functionality due to a memory leak and a crash on process exit. See this PR for more details.
This answer simply restores the functionality using a detour hook and does not deal with the memory leak or the crash. Use it at your own risk.
I was able to prevent the access-violation crash by manually freeing the FLS (fiber-local storage) slot created by .NET. Here is a simple demo.
Original answer below:
It turns out that thread is used by Windows 10 for parallel library loading (TppWorkerThread) and isn't the problem.
I ended up inspecting the Win32 API calls with this handy tool, and found that .NET calls GetModuleHandleEx with the GET_MODULE_HANDLE_EX_FLAG_PIN flag, thus pinning the module and preventing it from unloading.
So I hooked GetModuleHandleEx to intercept the calls and strip out that flag.
Voila! Now I can unload the NativeAOT-compiled DLL.
I know this approach is quite hacky, but at least it works.
If anyone happens to have a better solution, please let me know.
I'm investigating a handle leak in a WCF service, running on .NET 4.6.2. The service runs fine but over time handle count keeps increasing, with thousands of Token type handles sitting around in the process. It seems memory is also leaking very slowly (likely related to the handle leak).
Edit: it looks like event and thread handles are also leaked.
Process Explorer shows that the suspect handles all have the same name:
DOMAIN\some.username$:183db90
and all share the same address.
I attached WinDbg to the process and ran !htrace -enable and then !htrace -diff some time later. This gave me a list of almost 2000 newly opened handles and native stack traces, like this one:
Handle = 0x000000000000b02c - OPEN
Thread ID = 0x000000000000484c, Process ID = 0x0000000000002cdc
0x00007ffc66e80b3a: ntdll!NtCreateEvent+0x000000000000000a
0x00007ffc64272ce8: KERNELBASE!CreateEventW+0x0000000000000084
0x00007ffc5b392e0a: clr!CLREventBase::CreateManualEvent+0x000000000000003a
0x00007ffc5b3935c7: clr!Thread::AllocHandles+0x000000000000007b
0x00007ffc5b3943c7: clr!Thread::CreateNewOSThread+0x000000000000007f
0x00007ffc5b394308: clr!Thread::CreateNewThread+0x0000000000000090
0x00007ffc5b394afb: clr!ThreadpoolMgr::CreateUnimpersonatedThread+0x00000000000000cb
0x00007ffc5b394baf: clr!ThreadpoolMgr::MaybeAddWorkingWorker+0x000000000000010c
0x00007ffc5b1d8c74: clr!ManagedPerAppDomainTPCount::SetAppDomainRequestsActive+0x0000000000000024
0x00007ffc5b1d8d27: clr!ThreadpoolMgr::SetAppDomainRequestsActive+0x000000000000003f
0x00007ffc5b1d8cae: clr!ThreadPoolNative::RequestWorkerThread+0x000000000000002f
0x00007ffc5a019028: mscorlib_ni+0x0000000000549028
0x00007ffc59f5f48f: mscorlib_ni+0x000000000048f48f
0x00007ffc59f5f3b9: mscorlib_ni+0x000000000048f3b9
Another stack trace (a large portion of the ~2000 new handles have this):
Handle = 0x000000000000a0c8 - OPEN
Thread ID = 0x0000000000003614, Process ID = 0x0000000000002cdc
0x00007ffc66e817aa: ntdll!NtOpenProcessToken+0x000000000000000a
0x00007ffc64272eba: KERNELBASE!OpenProcessToken+0x000000000000000a
0x00007ffc5a01aa9b: mscorlib_ni+0x000000000054aa9b
0x00007ffc5a002ebd: mscorlib_ni+0x0000000000532ebd
0x00007ffc5a002e68: mscorlib_ni+0x0000000000532e68
0x00007ffc5a002d40: mscorlib_ni+0x0000000000532d40
0x00007ffc5a0027c7: mscorlib_ni+0x00000000005327c7
0x00007ffbfbfb3d6a: +0x00007ffbfbfb3d6a
When I run the !handle 0 0 command in WinDbg, I get the following result:
21046 Handles
Type Count
None 4
Event 2635 **
Section 360
File 408
Directory 4
Mutant 9
Semaphore 121
Key 77
Token 16803 **
Thread 554 **
IoCompletion 8
Timer 3
TpWorkerFactory 2
ALPC Port 7
WaitCompletionPacket 51
The ones marked with ** are increasing over time, although at different rates.
Edit 3:
I ran !dumpheap to see the number of Thread objects (unrelated classes removed):
!DumpHeap -stat -type System.Threading.Thread
Statistics:
MT Count TotalSize Class Name
00007ffc5a152bb0 745 71520 System.Threading.Thread
Active thread count fluctuates between 56 and 62 in Process Explorer as the process is handling some background tasks periodically.
Some of the stack traces are different but they're all native traces so I don't know what managed code triggered the handle creation. Is it possible to get the managed function call that's running on the newly created thread when Thread::CreateNewThread is called? I don't know what WinDbg command I'd use for this.
Note:
I cannot attach sample code to the question because the WCF service loads hundreds of DLLs, most of which are built from many source files. I have some very vague suspicions about which major area this may come from, but I don't know enough details to show an MCVE.
Edit: after Harry Johnston's comments below, I noticed that thread handle count is also increasing - I overlooked this earlier because of the high number of token handles.
I have a handle leak in a C# program. I'm trying to diagnose it with WinDbg's !htrace, roughly as presented in this answer, but when I run !htrace -diff in WinDbg I'm presented with stack traces that don't show the names of my C# functions (or even my .NET assembly).
I created a small test program to illustrate my difficulty. This program does nothing except "leak" handles.
using System.Collections.Generic;
using System.Threading;

class Program
{
    static List<Semaphore> handles = new List<Semaphore>();

    static void Main(string[] args)
    {
        while (true)
        {
            Fun1();
            Thread.Sleep(100);
        }
    }

    static void Fun1()
    {
        handles.Add(new Semaphore(0, 10));
    }
}
I compiled the assembly, and then in WinDbg I go "File" -> "Open Executable" and select my program (D:\Projects\Sandpit\bin\Debug\Sandpit.exe). I continue program execution, break it, and run "!htrace -enable", then continue for a bit longer, and then break and run "!htrace -diff". This is what I get:
0:004> !htrace -enable
Handle tracing enabled.
Handle tracing information snapshot successfully taken.
0:004> g
(1bd4.1c80): Break instruction exception - code 80000003 (first chance)
eax=7ffda000 ebx=00000000 ecx=00000000 edx=77b2f17d esi=00000000 edi=00000000
eip=77ac410c esp=0403fc20 ebp=0403fc4c iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
ntdll!DbgBreakPoint:
77ac410c cc int 3
0:004> !htrace -diff
Handle tracing information snapshot successfully taken.
0xd new stack traces since the previous snapshot.
Ignoring handles that were already closed...
Outstanding handles opened since the previous snapshot:
--------------------------------------
Handle = 0x00000250 - OPEN
Thread ID = 0x00001b58, Process ID = 0x00001bd4
0x77ad5704: ntdll!ZwCreateSemaphore+0x0000000c
0x75dcdcf9: KERNELBASE!CreateSemaphoreExW+0x0000005e
0x75f5e359: KERNEL32!CreateSemaphoreW+0x0000001d
*** WARNING: Unable to verify checksum for C:\Windows\assembly\NativeImages_v4.0.30319_32\System\13c079cdc1f4f4cb2f8f1b66c8642faa\System.ni.dll
0x65d7e805: System_ni+0x0020e805
0x65d47843: System_ni+0x001d7843
0x65d477ef: System_ni+0x001d77ef
0x004700c9: +0x004700c9
0x67d73dd2: clr!CallDescrWorkerInternal+0x00000034
0x67d9cf6d: clr!CallDescrWorkerWithHandler+0x0000006b
0x67d9d267: clr!MethodDescCallSite::CallTargetWorker+0x00000152
0x67eb10e0: clr!RunMain+0x000001aa
0x67eb1200: clr!Assembly::ExecuteMainMethod+0x00000124
--------------------------------------
Handle = 0x0000024c - OPEN
Thread ID = 0x00001b58, Process ID = 0x00001bd4
0x77ad5704: ntdll!ZwCreateSemaphore+0x0000000c
0x75dcdcf9: KERNELBASE!CreateSemaphoreExW+0x0000005e
0x75f5e359: KERNEL32!CreateSemaphoreW+0x0000001d
0x65d7e805: System_ni+0x0020e805
0x65d47843: System_ni+0x001d7843
0x65d477ef: System_ni+0x001d77ef
0x004700c9: +0x004700c9
0x67d73dd2: clr!CallDescrWorkerInternal+0x00000034
0x67d9cf6d: clr!CallDescrWorkerWithHandler+0x0000006b
0x67d9d267: clr!MethodDescCallSite::CallTargetWorker+0x00000152
0x67eb10e0: clr!RunMain+0x000001aa
0x67eb1200: clr!Assembly::ExecuteMainMethod+0x00000124
...
--------------------------------------
Displayed 0xd stack traces for outstanding handles opened since the previous snapshot.
As you can see, the stack trace is missing my C# function names "Main" and "Fun1". I believe "System_ni+0x..." frames may be my function frames, but I don't know. My question is, how do I get WinDbg to display function names for my C# functions in the stack trace?
Extra information:
My WinDbg symbol search path is
SRV*C:/symbols*http://msdl.microsoft.com/download/symbols;D:\Projects\Sandpit\bin\Debug;srv*
I don't get any errors when I open the executable in WinDbg. There is a file called "Sandpit.pdb" in the output directory ("D:\Projects\Sandpit\bin\Debug"). The project is built locally so the pdb file should match the exe. I downloaded WinDbg from here. I have "Enable native code debugging" checked in the project settings in Visual Studio.
WinDbg attempts to interpret the native call stack as best it can; however, to fully interpret the stack of a CLR application, WinDbg needs an extension called SOS. This extension has a separate command, !clrstack, for viewing the managed stack. You will need to load the SOS extension first using the .loadby sos clr command (or similar; I remember getting the correct version of SOS to load could be a bit of a pain).
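Assuming SOS loads correctly, the typical sequence looks roughly like this; .loadby sos clr loads SOS from the same directory as the loaded CLR, !clrstack shows the managed call stack for the current thread, and ~*e !clrstack runs it across every thread (output omitted):

.loadby sos clr
!clrstack
~*e !clrstack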
For more information see
WinDbg 101–A step by step guide to finding a simple memory leak in your .Net application
WinDbg / SOS Cheat Sheet
We have a rich client application developed using WPF/C#.Net 4.0 which interops with in-house COM DLLs. Regular events are raised via this COM interface containing video data.
As part of the application we render video via Windows Media Foundation and have created interops to use Windows Media Foundation. We have multiple WMF pipelines rendering different video at the same time.
The application runs for 6-8 hours rendering video. Private bytes remain consistently steady during this time (around 500-600 MB).
At some point the application appears to hang; at that point private bytes increase very rapidly until the process consumes approximately 1.4 GB of memory and crashes with an OutOfMemoryException.
We have reproduced this on 5 different workstations with different graphics cards (NVIDIA and ATI) and a mixture of Windows 7 32-bit and 64-bit.
We have analyzed 3 dump files and found that the finalizer thread is waiting on a call to the ole32!GetToSTA method. We are unable to determine what causes the finalizer thread to block or how to resolve it. I have pasted excerpts from the three dumps we've been analyzing:
Dump 1)
Thread 2:ae0 is waiting on an STA thread efc
Thread 28:efc is calling a WaitForSingleObject. The handle it is waiting on is actually a thread handle 5ab4 which is thread id 14a4
Thread 130:14a4 has the following stack:
37f4fdf4 753776a6 ntdll!NtRemoveIoCompletion+0x15
37f4fe20 63301743 KERNELBASE!GetQueuedCompletionStatus+0x29
37f4fe74 6330d0db WMNetMgr!CNSIoCompletionPortNT::WaitAndServeCompletionsLoop+0x5e
37f4fe94 633199bf WMNetMgr!CNSIoCompletionPortNT::WaitAndServeCompletions+0x4c
37f4fecc 63312dbd WMNetMgr!CWorkThreadManager::CWorkerThread::ThreadMain+0xa2
37f4fed8 769b3677 WMNetMgr!CWMThread::ThreadFunc+0x3b
37f4fee4 77679f42 kernel32!BaseThreadInitThunk+0xe
37f4ff24 77679f15 ntdll!__RtlUserThreadStart+0x70
37f4ff3c 00000000 ntdll!_RtlUserThreadStart+0x1b
Dump2)
STA thread:
1127f474 75f80a91 ntdll!ZwWaitForSingleObject+0x15
1127f4e0 77411184 KERNELBASE!WaitForSingleObjectEx+0x98
1127f4f8 77411138 kernel32!WaitForSingleObjectExImplementation+0x75
1127f50c 63ae5f29 kernel32!WaitForSingleObject+0x12
1127f530 63a8eb2e WMNetMgr!CWMThread::Wait+0x78
1127f54c 63a8f128 WMNetMgr!CWorkThreadManager::CThreadPool::Shutdown+0x70
1127f568 63a76e10 WMNetMgr!CWorkThreadManager::Shutdown+0x34
1127f59c 63a76f2d WMNetMgr!CNSClientNetManagerHelper::Shutdown+0xdd
1127f5a4 63cd228e WMNetMgr!CNSClientNetManager::Shutdown+0x66
WARNING: Stack unwind information not available. Following frames may be wrong.
1127f5bc 63cd23a6 WMVCORE!WMCreateProfileManager+0xeef6
1127f5dc 63c573ca WMVCORE!WMCreateProfileManager+0xf00e
1127f5e8 63c62f18 WMVCORE!WMIsAvailableOffline+0x2ba3b
1127f618 63c19da6 WMVCORE!WMIsAvailableOffline+0x37589
1127f630 63c1aca2 WMVCORE!WMIsContentProtected+0x56e4
1127f63c 63c14bd7 WMVCORE!WMIsContentProtected+0x65e0
1127f650 113de6e8 WMVCORE!WMIsContentProtected+0x515
1127f660 113de513 wmp!CWMDRMReaderStub::CExternalStub::ShutdownInternalRefs+0x1d0
1127f674 113c1988 wmp!CWMDRMReaderStub::ExternalRelease+0x4f
1127f67c 1160a5b9 wmp!CWMDRMReaderStub::CExternalStub::Release+0x13
1127f6a4 1161745f wmp!CWMGraph::CleanupUpStream_selfprotected+0xbe
Finalizer thread is trying to switch to STA:
0126eccc 75f80a91 ntdll!ZwWaitForSingleObject+0x15
0126ed38 77411184 KERNELBASE!WaitForSingleObjectEx+0x98
0126ed50 77411138 kernel32!WaitForSingleObjectExImplementation+0x75
0126ed64 75d78907 kernel32!WaitForSingleObject+0x12
0126ed88 75e9a819 ole32!GetToSTA+0xad
Dump3)
The finalizer thread is in the GetToSTA call, so it is waiting for a COM object to free
Thread 29 is a COM object in the STA, and it is waiting on a critical section owned by thread 53 (1bf4)
Thread 53 is doing:
1cbcf990 76310a91 ntdll!ZwWaitForSingleObject+0x15
1cbcf9fc 74cb1184 KERNELBASE!WaitForSingleObjectEx+0x98
1cbcfa14 74cb1138 kernel32!WaitForSingleObjectExImplementation+0x75
1cbcfa28 65dfb6bb kernel32!WaitForSingleObject+0x12
WARNING: Stack unwind information not available. Following frames may be wrong.
1cbcfa48 74cb3677 wmp!Ordinal3000+0x53280
1cbcfa54 77029f42 kernel32!BaseThreadInitThunk+0xe
1cbcfa94 77029f15 ntdll!__RtlUserThreadStart+0x70
1cbcfaac 00000000 ntdll!_RtlUserThreadStart+0x1b
Any ideas on how we might resolve this issue?
Well, the finalizer thread is deadlocked. That will certainly result in an eventual OOM. We can't see the full stack trace for the finalizer thread, but there are good odds that you'll see SwitchAptAndDispatchCall() and ReleaseRCWListInCorrectCtx() in the trace, indicating that it is trying to call IUnknown::Release() to release a COM object. And that object is apartment-threaded, so a thread switch is required to safely make the call.
I don't see any decent candidates in the stack traces you posted, possibly because you didn't get the right one or the thread is already busy shutting down due to the exception. Try to catch it earlier with a debugger break as soon as you see the virtual memory size climb.
The most common cause for a deadlock like this is violating the requirements for an STA thread, which state that it must never block and must pump a message loop. The never-block requirement is usually easy to meet in a .NET program: the CLR will pump a message loop when necessary when you use the lock statement or a WaitHandle.WaitXxx() call. It is, however, very common to forget to pump a message loop, especially since doing so is somewhat painful; Application.Run() is required.
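For illustration, a minimal sketch of a dedicated STA thread that keeps pumping messages via Application.Run (a hypothetical worker thread, not code from the application in question):

using System.Threading;
using System.Windows.Forms;

class StaWorkerExample
{
    static void Main()
    {
        var staThread = new Thread(() =>
        {
            // Create apartment-threaded COM objects here, then keep pumping
            // messages so cross-apartment calls (such as the finalizer
            // releasing RCWs) can be serviced on this thread.
            Application.Run(); // pumps until Application.ExitThread() is called
        });
        staThread.SetApartmentState(ApartmentState.STA);
        staThread.Start();
        staThread.Join();
    }
}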
I programmed a Windows service to do some routine work.
I installed it with InstallUtil as a Windows service; it wakes up, does something, and then Thread.Sleep()s for five minutes.
The code is simple, but I've noticed a potential memory leak. I traced it using tasklist from the command line and drew a chart.
Can I say it's pretty clear there is a memory leak, albeit a small one?
My code is below. Please help me find the potential leak. Thanks.
public partial class AutoReport : ServiceBase
{
    int Time = Convert.ToInt32(AppSettings["Time"].ToString());
    private Utilities.RequestHelper requestHelper = new RequestHelper();

    public AutoReport()
    {
        InitializeComponent();
    }

    protected override void OnStart(string[] args)
    {
        Thread thread = new Thread(new ParameterizedThreadStart(DoWork));
        thread.Start();
    }

    protected override void OnStop()
    {
    }

    public void DoWork(object data)
    {
        while (true)
        {
            string jsonOutStr = requestHelper.PostDataToUrl("{\"KeyString\":\"somestring\"}", "http://myurl.ashx");
            Thread.Sleep(Time);
        }
    }
}
Edit: after using WinDbg as @Russell suggested, here is the output. What should I do about these classes?
MT Count TotalSize ClassName
79330b24 1529 123096 System.String
793042f4 471 41952 System.Object[]
79332b54 337 8088 System.Collections.ArrayList
79333594 211 70600 System.Byte[]
79331ca4 199 3980 System.RuntimeType
7a5e9ea4 159 2544 System.Collections.Specialized.NameObjectCollectionBase+NameObjectEntry
79333274 143 30888 System.Collections.Hashtable+bucket[]
79333178 142 7952 System.Collections.Hashtable
79331754 121 57208 System.Char[]
7a5d8120 100 4000 System.Net.LazyAsyncResult
00d522e4 95 5320 System.Configuration.FactoryRecord
00d54d60 76 3952 System.Configuration.ConfigurationProperty
7a5df92c 74 2664 System.Net.CoreResponseData
7a5d8060 74 5032 System.Net.WebHeaderCollection
79332d70 73 876 System.Int32
79330c60 73 1460 System.Text.StringBuilder
79332e4c 72 2016 System.Collections.ArrayList+ArrayListEnumeratorSimple
7.93E+09 69 1380 Microsoft.Win32.SafeHandles.SafeTokenHandle
7a5e0d0c 53 1060 System.Net.HeaderInfo
7a5e4444 53 2120 System.Net.TimerThread+TimerNode
79330740 52 624 System.Object
7a5df1d0 50 2000 System.Net.AuthenticationState
7a5e031c 50 5800 System.Net.ConnectStream
7aa46f78 49 588 System.Net.ConnectStreamContext
793180f4 48 960 System.IntPtr[]
This is how I'd go about finding the memory leak:
1) Download WinDbg if you don't already have it. It's a really powerful (although difficult to use as it's complicated) debugger.
2) Run WinDbg and attach it to your process by pressing F6 and selecting your exe.
3) When it has attached, type these commands (each followed by Enter):
//this will load the managed extensions
.loadby sos clr
//this will dump the details of all your objects on the heap
!dumpheap -stat
//this will start the service again
g
Now wait a few minutes and type Ctrl+Break to break back into the service. Run the !Dumpheap -stat command again to find out what is on the heap now. If you have a memory leak (in managed code) then you will see one or more of your classes keep getting added to the heap over time. You now know what is being kept in memory so you know where to look for the problem in your code. You can work out what is holding references to the objects being leaked from within WinDbg if you like but it's a complicated process. If you decide to use WinDbg then you probably want to start by reading Tess's blog and doing the labs.
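If you do want to chase the references from inside WinDbg, the usual follow-up uses two more standard SOS commands; the method table and object address below are placeholders you take from the earlier !dumpheap -stat output:

//list the instances of the suspect type, using its MT from !dumpheap -stat
!dumpheap -mt <MethodTable>
//show the GC roots keeping one of those instances alive
!gcroot <ObjectAddress>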
You will need to use an allocation profiler to detect memory leaks. There are some good profilers for that; I can recommend AQTime (see this video).
Also read: How to Locate Memory Leaks Using the Allocation Profiler
Maybe this article can be helpful too.
To find a memory leak, you should look at the performance counters over a long period of time. If you see the number of handles or the total bytes in all heaps growing without ever decreasing, you have a real memory leak. Then you can use, for example, the profiling tools in Visual Studio to track the leak down. There is also a tool from Redgate which works quite well.
Difficult to say, but I suspect this line in your while loop within DoWork:
JsonIn jsonIn = new JsonIn { KeyString = "secretekeystring", };
Although jsonIn only has scope within the while block, I would hazard that the garbage collector is taking its time to remove the unwanted instances.