I have created an application which reads a file and fetches metadata from it. When I launch the application, the private working set is around 8MB (as viewed in Task Manager). When I scan the file, the memory shoots up to 150MB and stays there. If I scan additional files using the same instance of the application, the memory keeps piling up. To understand this behavior, I used a memory profiler (Red Gate's), which showed me the following statistics:
Out of the 150MB of private working set memory:
Unmanaged memory: 94MB
Other resources (string, array, etc.): 30MB
This puzzles me, as I am not using any unmanaged code or any P/Invoke calls. I have also tried GC.Collect() without success.
Can someone please guide me on how I can reduce the unmanaged memory usage of my application, and what the possible causes might be?
Thanks in Advance
A recent update to ANTS shows you which .NET classes your application's unmanaged memory is used by. After enabling unmanaged memory profiling in the setup screen, navigate to the class list and sort by the new 'unmanaged size' column.
Though you may not have knowingly used unmanaged memory, many .NET Framework libraries do use native resources - for example the imaging libraries.
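For instance, if the metadata scan happens to go through System.Drawing (the question doesn't show the code, so this is only an assumption), each Bitmap wraps a native GDI+ buffer that is only released promptly when the object is disposed. A minimal sketch:

    using System;
    using System.Drawing;   // wraps GDI+; pixel data lives in native memory

    class MetadataScanner
    {
        // Hypothetical helper: reads image dimensions without leaking GDI+ memory.
        public static string ReadDimensions(string path)
        {
            // Without the using block, the native GDI+ buffer stays alive until
            // a finalizer eventually runs, which shows up as "unmanaged memory".
            using (var bitmap = new Bitmap(path))
            {
                return $"{bitmap.Width} x {bitmap.Height}";
            }
        }
    }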
Related
What is the reason for pinned GC handles when working with unmanaged .NET components? This happens from time to time without any code changes or anything else. When investigating the issue, I see a lot of pinned GC handles.
These handles seem to stay in memory for the entire application lifetime. In this case, the library is GdPicture (14). Is there any way to investigate why those instances are not cleaned up? I'm using Dispose()/using everywhere and can't find any GC roots in the managed code.
Thanks a lot!
EDIT
Another strange behaviour is that Task Manager shows the application using about 6GB of RAM, while the memory profiler shows usage of 400MB (the red line is live bytes).
What is the reason for pinned GC handles when working with unmanaged .NET components?
Pinning is needed when working with unmanaged code. It prevents objects from being moved during garbage collection so that the unmanaged code can hold a pointer to them. The garbage collector will update all .NET references, but it will not update unmanaged pointer values.
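As an illustration of what an interop library typically does under the hood when it hands a managed buffer to native code (the names here are generic, not GdPicture's actual API):

    using System;
    using System.Runtime.InteropServices;

    class PinningExample
    {
        static void Main()
        {
            byte[] buffer = new byte[4096];

            // Pin the array so the GC cannot move it while native code holds
            // a raw pointer into it.
            GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
            try
            {
                IntPtr nativePointer = handle.AddrOfPinnedObject();
                // ... pass nativePointer to the unmanaged function here ...
            }
            finally
            {
                // If a library forgets this call, the pinned handle (and the
                // buffer it roots) stays alive for the life of the process.
                handle.Free();
            }
        }
    }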
Is there any way to investigate why those instances are not cleaned up?
No. The reason is always the same: there's a bug in the code, either in your code (assume that first) or in a third-party library (libraries are widely used, so chances are that leaks in a library have already been found by someone else).
I'm using Dispose()/using everywhere
Seems like you missed one or it's not using the disposable pattern.
Another strange behaviour is that Task Manager shows the application using about 6GB of RAM, while the memory profiler shows usage of 400MB (the red line is live bytes).
A .NET memory profiler may only show the .NET part of memory (400 MB) and omit the rest (5600 MB).
Task manager is not interested in .NET. It cares about physical RAM mostly, which is why Task Manager is not a good analytics tool in general. You don't want to analyze physical RAM, you want to analyze virtual memory.
To look for memory leaks, use Process Explorer and show the "Private Bytes" and "Virtual Size" columns. Process Explorer can also show you a graph over time per process.
How to proceed?
Forget about the unmanaged leak for a moment. Use a .NET profiler that can take memory snapshots and lets you see each individual object inside, as well as statistics.
Try to figure out the steps it takes to reproduce the leak consistently. Then:
Take a snapshot
Repeat the leak procedure 10 times
Take a snapshot
Repeat the leak procedure another 10 times
Take a snapshot
Compare the snapshots from steps 1 and 3 and check for managed types whose counts differ by a multiple of 10. Then compare the snapshots from steps 3 and 5 and check the same types again; the difference must again be a multiple of 10. You can't leak 7 objects when you run a method 10 times.
Do a code review of the places where the affected types are used, based on your knowledge of the leak procedure (which methods are called) and the managed type. Make sure every instance is disposed or released properly, as sketched below.
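When doing that review, the pattern to look for is roughly the following; SomeNativeWrapper is a stand-in for whatever disposable type the snapshot comparison points at, not a real GdPicture class:

    using System;

    // Stand-in for whatever disposable type the snapshot diff points at.
    class SomeNativeWrapper : IDisposable
    {
        public void Process() { /* work that may throw */ }
        public void Dispose() { /* releases the underlying native resource */ }
    }

    class Review
    {
        static void Main()
        {
            // Leak-prone: if Process() throws, Dispose() is never reached and
            // the native resource lives until (at best) a finalizer runs.
            var leaky = new SomeNativeWrapper();
            leaky.Process();
            leaky.Dispose();

            // Safe: Dispose() runs even when Process() throws, releasing the
            // native memory deterministically.
            using (var wrapper = new SomeNativeWrapper())
            {
                wrapper.Process();
            }
        }
    }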
My app displays a single high-resolution image. When I display an image, Task Manager shows memory usage of 350MB, but GetTotalMemory returns 2.5MB and CLRProfiler shows root memory usage of 2.0MB.
Why is loading a 24-megapixel image into a BitmapImage object not registering in GetTotalMemory or CLRProfiler?
The .NET Framework classes that manipulate images, like System.Drawing.Bitmap and System.Windows.Media.Imaging.BitmapSource, are wrapper classes. They don't actually do the heavy lifting involved with imaging; they pass the job on to pre-existing imaging APIs in Windows. System.Drawing uses GDI+, a set of C functions that are P/Invoked. System.Windows uses WIC, the Windows Imaging Component, a COM API.
Naturally, these imaging APIs don't know anything about the GC heap. They were written in native C++ and use the memory management functions provided by the operating system, like VirtualAlloc(), HeapAlloc() and MapViewOfFile().
Accordingly, you cannot see the address space consumed by this native code with managed memory profilers or GC class methods. It isn't something you'd ever want to try to find out either; there is a lot of this kind of P/Invoke/COM interop buried in the .NET Framework. A decent memory profiler (i.e. not CLRProfiler) can give you an idea how much of this unmanaged address space is used, but it won't break it down as well as it can for managed code. SysInternals' VMMap is a decent utility, with the right price, to get a look under the covers.
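A small sketch of why GC.GetTotalMemory reports so little; the file path below is made up and the exact numbers will vary:

    using System;
    using System.Windows.Media.Imaging;   // WPF imaging, backed by WIC

    class Program
    {
        [STAThread]
        static void Main()
        {
            long before = GC.GetTotalMemory(forceFullCollection: true);

            // Hypothetical path; a 24-megapixel image decodes to roughly 100 MB
            // of pixel data.
            var image = new BitmapImage();
            image.BeginInit();
            image.UriSource = new Uri(@"C:\temp\large-photo.jpg");
            image.CacheOption = BitmapCacheOption.OnLoad;   // decode immediately
            image.EndInit();

            long after = GC.GetTotalMemory(forceFullCollection: true);

            // The managed delta is tiny (just the wrapper object) because the
            // decoded pixels live in native memory owned by WIC, not on the GC heap.
            Console.WriteLine($"Decoded: {image.PixelWidth} x {image.PixelHeight}");
            Console.WriteLine($"Managed delta: {(after - before) / 1024} KB");
        }
    }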
The reason I ask is that I have an application that (among other things) calls a MATLAB .NET component whenever data is written into a specific file. The component reads the file and creates an image out of the data contained within. This works fine.
However, when I use the underlying application to additionally process a "significant" amount of data and display the processed data in a table, the call to MATLAB throws an out-of-memory exception, but only when I am processing this large amount of data.
Is this not an indication that the MATLAB component relies on the available memory of the calling application? I guess I just don't understand how MATLAB memory works when it is called from .NET.
(I should also note that I call clear all before every call to the MATLAB function in an attempt to "start from scratch", but it fails regardless)
COM components built by MATLAB Builder NE are in-process COM servers, meaning they are DLLs that are loaded into your application's memory space. That in turn means the MCR, which is a kind of MATLAB virtual machine, lives in your memory space as well.
I believe that .NET components should behave exactly the same.
It is entirely possible, and from what you've described, even likely that the MATLAB component is making use of unmanaged memory (memory that is not managed by the .NET garbage collector.) There is very little you can do about this, apart from ensuring that you only feed it expected data in expected quantities. You may also wish to create a support ticket with MATLAB, if you believe you are using it properly.
I've never used MATLAB from C#, but as far as I can see it uses COM components to interact with the CLR world. You load the unmanaged MATLAB DLLs into your process's memory space, and since a 32-bit CLR process has only approximately 1.2GB of usable memory space, you run out of that available space.
You can find an interesting description of how an unmanaged COM component is loaded into managed memory here: Memory Management Of Unmanaged Component By CLR
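If the 32-bit address-space limit is the suspect, a quick sanity check from the calling application (Environment.Is64BitProcess needs .NET 4.0 or later) looks like this:

    using System;

    class Program
    {
        static void Main()
        {
            // A 32-bit process has roughly 2 GB of user address space (often only
            // ~1.2-1.4 GB usable once the CLR, the MCR and fragmentation are taken
            // into account), so loading the MATLAB runtime in-process can exhaust it.
            Console.WriteLine($"64-bit process: {Environment.Is64BitProcess}");
            Console.WriteLine($"64-bit OS:      {Environment.Is64BitOperatingSystem}");
        }
    }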
I'm developing a C# application which seems to have a leak.
I've used a memory profiler and found that my private bytes keep increasing but Bytes in All Heaps does not, which probably means it's a native memory leak.
Now I'm stuck: how do I find memory leaks in native code?
First, if you have a dump of the leaking process, you can open it in WinDbg and issue the command:
!address -summary
If RegionUsageHeap is large, it is probably a native memory leak.
If RegionUsageIsVAD is large, it is probably a .NET memory leak.
If this is a native leak, you have two options:
Use DebugDiag: when prompted, choose 'Native Memory leak and Handle leak', choose the process you want to diagnose, and work with the application until you experience the memory leak. When complete, generate a full dump of the application (right-click the leak rule and select Full user dump). You can then analyze the generated dump (you'll need to have symbols properly configured for this to work efficiently): on the 'Advanced analysis' tab, select 'Memory pressure analyzers', open the dump file and press 'Start analysis'. This produces an HTML report you can analyze. You can refer to this page for a detailed walkthrough.
Use Application Verifier / WinDbg. In Application Verifier, select your application (.exe). On the Tests page, make sure Basics/Heaps is selected. In the lower pane, make sure 'Traces' is set to true. Once the configuration is saved, re-run the application and generate a full dump when the leak occurs. Don't forget to clear the application's flags after the dump is generated. Then you can open the dump in WinDbg and investigate the leak with the help of the '!heap' command. In particular, '!heap -l' will give you a list of leaked blocks, and '!heap -p -a <address>' will show the details of a block, including the allocation call stack.
If this is a .NET leak, there are third-party tools to troubleshoot it. Starting from version 1.2, DebugDiag is also able to perform .NET memory leak analysis (I've never tried this, however).
Diagnosing native memory leaks in a managed application is (at least initially) very similar to diagnosing memory leaks in any other native application.
The way I normally approach these problems is to get the process to leak a large amount of memory, take a full process dump and then examine the dump to see what is using the most memory. For example if your process has a normal / initial private bytes of ~20MB but you can get your process to leak memory until it has ~200MB of private bytes, then there is a good chance that ~180MB of that memory is leaked - generally speaking whatever has the most memory allocated is where you should start looking.
Microsoft has a very useful tool called DebugDiag. Initially developed for diagnosing memory leaks in IIS, it is a very versatile tool and very handy when dealing with memory issues. If you give it a crash dump, it will perform some analysis and should (at the very least) tell you which module has allocated all of that memory; you can then start looking more specifically at how that module is used.
It's hard to give you a solid response without more information, but it sounds like the lib you are trying to use has a memory leak. You'll need to profile the lib with the appropriate tools, depending on the language it was written in. If you don't have the lib's source, contact the developers and have them fix the leak.
If you can post the name of the library and some of your source code (as well as the native method signatures), we might be able to give you some more specific advice.
The private bytes are in heaps managed by the .NET Framework; you need to use a professional tool to analyze your code, such as Red Gate's memory profiler, and find objects that are created but never disposed.
Usually I had the best results when hunting memory leaks using the ANTS Memory Profiler.
(Or other tools; personally I've had the best experience with ANTS.)
I know it's a basic question, but when I run my program, which uses threading, I find that the memory (VM and Memory Used) shown for the application in Task Manager keeps increasing, even though my threads are stopped at that moment. I wonder if there's any way to find the source of this, or at least to know which line the application is currently executing.
I used the thread watch window but didn't get any useful information regarding that.
If you're sure your program is using excessive memory then getting your hands on a memory profiler would be a good first approach.
You can use the CLR Profiler application to get snapshots of your memory consumption. Then you'll be able to identify the source of your issue.
CLR Profiler is free and available here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=be2d842b-fdce-4600-8d32-a3cf74fda5e1
It's worth noting that the memory profiler's figures do not map directly to the Memory (VM and Memory Used) or Working Set counters in Task Manager.
The working set of a program is a collection of those pages in its virtual address space that have been recently referenced. It includes both shared and private data. The shared data includes pages that contain all instructions your application executes, including those in your DLLs and the system DLLs. As the working set size increases, memory demand increases.
If memory serves, the memory profiler will look at private bytes, which represent the actual memory that you are using.
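A quick way to see the different counters side by side from inside the process itself (the values are illustrative and will not match Task Manager exactly from refresh to refresh):

    using System;
    using System.Diagnostics;

    class Program
    {
        static void Main()
        {
            var proc = Process.GetCurrentProcess();

            // Working set: physical pages currently resident, shared + private
            // (roughly what Task Manager's memory column shows).
            Console.WriteLine($"Working set:   {proc.WorkingSet64 / (1024 * 1024)} MB");

            // Private bytes: committed memory belonging to this process alone --
            // the figure a memory profiler's numbers relate to more closely.
            Console.WriteLine($"Private bytes: {proc.PrivateMemorySize64 / (1024 * 1024)} MB");

            // Virtual size: total reserved + committed address space.
            Console.WriteLine($"Virtual size:  {proc.VirtualMemorySize64 / (1024 * 1024)} MB");
        }
    }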
Also see the section 'A comment on performance counters and how not to use taskmanager' in this article from Tess Ferrandez.