C# Memory leak, tracking techniques and tools - c#

The app I am writing is suffering quite dramatically from a memory leak. Pretty much the entire object model stays in memory when a user closes down a loaded project. The way I know this is that closing a project in my app barely affects the memory usage in the task manager, and opening a new project almost doubles it each time.

I downloaded JetBrains' dotTrace Memory 3.5, but there are little to no instructions for its use. I kind of figured out how to use it, and it shows that the whole object model is still in memory when I take a snapshot after a project has been closed down. Trawling through my project-close code, I can see no reason for this.

Does anyone know of anything in particular that normally causes memory leaks in C#, or of any tools or techniques for tracking down the problem? It's all well and good having an app that shows my entire object model is still loaded into memory, but it doesn't show me what object or variable is keeping it there. Thanks in advance.

Firstly, investigate whether the leak could be due to the registration of event handlers, as this is one of the easiest ways to accidentally root your objects. For example, if you have a class 'Bob' that adds one of its methods, 'OnSomeEvent', as a delegate to an event raised by a long-lived component of your system (e.g. 'UserSettingsManager'), then objects of class 'Bob' will not be collected, as they are kept alive by virtue of being event handlers (i.e. event callbacks are not weak references).
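As a minimal sketch of that pattern (using the 'Bob'/'OnSomeEvent'/'UserSettingsManager' names from above; the static event is purely illustrative), the subscription in the constructor is what roots each 'Bob', and removing it is what lets the GC collect it:

    using System;

    // Stand-in for a long-lived component such as 'UserSettingsManager'.
    static class UserSettingsManager
    {
        public static event EventHandler SettingsChanged;

        public static void RaiseSettingsChanged() =>
            SettingsChanged?.Invoke(null, EventArgs.Empty);
    }

    class Bob
    {
        public Bob()
        {
            // The event's invocation list now holds a delegate whose Target is
            // this Bob, so the publisher keeps it alive indefinitely.
            UserSettingsManager.SettingsChanged += OnSomeEvent;
        }

        private void OnSomeEvent(object sender, EventArgs e)
        {
            // react to the change
        }

        // Until this runs, the Bob instance cannot be garbage collected.
        public void Detach() =>
            UserSettingsManager.SettingsChanged -= OnSomeEvent;
    }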
As an alternative to the commercial tools, there is an extension to the Windows debugger called SOS (Son of Strike) that you can use to debug managed applications. However, it is not for the faint-hearted, as it is a low-level, command-line tool that requires a lot of up-front learning. It is very powerful, though, and does not struggle as much with larger processes (in terms of heap consumption) as the commercial tools do.
In terms of the commercial profilers, I've had good experiences with Redgate's ANTS Memory Profiler (but I have had colleagues who hate it) so it may be worth trialing that.

Probably the most common cause of managed memory leaks is unsubscribed event handlers.
There are a number of useful tools for tracking down errors like this. Personally I like ANTS Memory Profiler and WinDbg/SOS. You want to find out what is rooting the object graphs. With WinDbg/SOS there's a !gcroot command that will tell you the roots of any given object.
Check Tess' blog for guides on how to troubleshoot memory problems using WinDbg/SOS.

Try these:
http://blogs.msdn.com/b/tess/archive/2010/01/14/debugging-native-memory-leaks-with-debug-diag-1-1.aspx
http://www.microsoft.com/downloads/en/details.aspx?FamilyID=28bd5941-c458-46f1-b24d-f60151d875a3&displaylang=en

Related

Determining where object allocations for objects on the heap occurred

Is there any tool such that it can get a heap dump from a running application and determine/group objects by where in source code they were created?
With no changes to source code and ideally something free.
What about .NET Memory Profiler from ANTS for example.
Maybe CLR Profiler.
The information is not available if you create a memory dump. In order to gather this, you have to monitor the process as it is running. You could launch the application via WinDbg and set breakpoints on all the constructors you're interested in (hopefully you don't want to look at each and every object).
If you set up the breakpoint so that it dumps the stack, you will have the point of creation for the object. However, keep in mind that objects may move around during GC, which will make pairing objects with stacks difficult (or even impossible in some cases).
Since your question is tagged with performance and profiling, I gather that you want to reduce memory allocations. Why not just look at the number of objects created (or possibly the largest objects created) by inspecting the heap, then go through the source code and figure out where such instances are created?
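If a temporary source change is acceptable (which admittedly goes against the "no changes to source code" wish in the question), one crude form of instrumentation is to record a stack trace in the constructors of the suspect types. A rough sketch, with hypothetical helper and type names:

    using System;
    using System.Collections.Concurrent;
    using System.Diagnostics;
    using System.Runtime.CompilerServices;

    // Hypothetical helper: counts instances of instrumented types per creation site.
    static class AllocationTracker
    {
        public static readonly ConcurrentDictionary<string, int> Sites =
            new ConcurrentDictionary<string, int>();

        [MethodImpl(MethodImplOptions.NoInlining)]
        public static void Record()
        {
            // Skip Record() and the constructor frame so the key is the real call site.
            string site = new StackTrace(2, true).ToString();
            Sites.AddOrUpdate(site, 1, (_, count) => count + 1);
        }
    }

    class Widget // hypothetical type under suspicion
    {
        public Widget()
        {
            AllocationTracker.Record(); // expensive - only do this in a diagnostic build
        }
    }

Dumping AllocationTracker.Sites once memory looks bloated then gives a rough "created where, and how many times" table, at the cost of noticeably slower allocations.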
As others have suggested, use a memory profiler; MemProfiler is definitely the most advanced one (I've tried all the existing .NET profilers). It has a 14-day trial.
You need a .NET memory profiler. These tools allow you to follow the object graphs on the garbage-collected heap and can be very useful in identifying the sources of memory leaks. While they may not necessarily tell you the method where an object was created, they will tell you which instances of which classes are holding on to the objects and allow you to take differences of snapshots of the GC heap. They don't require modifications to source code. You may want to have a look at:
What Are Some Good .NET Profilers?
Our QA teams use http://www.jetbrains.com/profiler/ for this kind of thing here when we run into bottlenecks. I'm pretty sure it will give you a list of allocations by method call. I'll go install it and check :)
Good old WinDbg + SOS + PDBs will handle the dumping.
As for the "where in source code they were created" part - that is impossible without instrumentation or injection.
The SOS Debugging Extension
How to use: http://msdn.microsoft.com/en-us/library/yy6d2sxs.aspx

.NET : Monitoring Objects Lifetime (Birth/Death/Memory)

Issue: What is the best way to get a comprehensive view of the birth/death/memory usage of ALL objects created during an application's lifetime? (A graphical report would be better.)
Why such a question:
Among many other things, the idea behind this is to reveal long-lived objects that may never be collected by the garbage collector or that cause memory trouble (such as heap/stack issues), and to provide valuable information for managing object lifecycles efficiently. (I actually just spent a whole night debugging a multithreaded application, only to finally notice that a "believed to be disposed/renewed" object was in fact still alive and smashing up the server memory.)
VS2010 Performance Wizard & Profiler might be a good starter...
I stumbled upon a few ways to do this programmatically, but they involved wrapping up objects individually (painstaking and not code-seamless); a rough sketch of that idea follows the diagram below.
I'm looking for something that would look like this :
Application START[-----------------------------------------------------------]END
Object 1 [---------------------------]
Object 2 [---------------------------]
Object 3 [-----------------------------------------------------]
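For what it's worth, here is a minimal sketch of the per-object "wrapping" idea mentioned above (all names are illustrative): each instance logs its birth in the constructor and its death in the finalizer, which yields the raw data for a chart like the one sketched, though it says nothing about memory size and the finalizers themselves add overhead:

    using System;
    using System.Diagnostics;
    using System.Threading;

    // Base class (or wrapper) for anything whose lifetime you want to chart.
    class TrackedObject
    {
        private static int _nextId;
        private readonly int _id = Interlocked.Increment(ref _nextId);
        private readonly Stopwatch _age = Stopwatch.StartNew();

        public TrackedObject()
        {
            Console.WriteLine($"#{_id} born");
        }

        ~TrackedObject()
        {
            // Runs when the GC finalizes the instance, i.e. at 'death'.
            Console.WriteLine($"#{_id} died after {_age.Elapsed}");
        }
    }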
Mika,
As you noted you can use the VS2010 Profiler (if you have Visual Studio Premium or Ultimate). For more information see the MSDN page about collecting 'Object Lifetime' information.
Beware that this profiling mode is pretty heavyweight compared to other profiling modes and you may find the collected VSP file is quite large unless you have a fairly focussed scenario that you are profiling.
The profiling report will show the information in tabular form, but you can copy the data into Excel or another program of your choice for further analysis/charting.
Disclaimer: I work on the Visual Studio profiler.
There are some tools that can do that, though not as a simple graph like this. You will need to learn a bit about those tools.
Free:
CLR Profiler
http://msdn.microsoft.com/en-us/library/ff650691.aspx
WinDbg:
http://www.microsoft.com/whdc/devtools/debugging/default.mspx
Use the SOS or SOSEX extension with WinDbg to profile .NET code.
Commercial:
Red Gate Ants Profiler:
http://www.red-gate.com/products/dotnet-development/ants-performance-profiler/

What is the best way to find a process's memory allocations in terms of C# objects

I have written various C# console-based applications, some of them long-running and some not, which can over time have a large memory footprint. When looking at the Windows performance monitor via the task manager, the same question keeps cropping up in my mind: how do I get a breakdown of the number of objects by type that are contributing to this footprint, and which of those are f-reachable versus those which aren't and hence can be collected? On numerous occasions I've performed a code inspection to ensure that I am not unnecessarily holding on to objects longer than required and that I am disposing of objects with the using construct. I have also recently looked at employing the GC.Collect method when I have released a large number of objects (for example, held in a collection which has just been cleared). However, I am not so sure that this made much difference, so I threw that code away.
I am guessing that there are tools in the Sysinternals suite that can help resolve these memory questions, but I am not sure which ones or how to use them. The alternative would be to pay for a third-party profiling tool such as JetBrains dotTrace, but I need to make sure that I've explored the free options first before going cap in hand to my manager.
There is the CLR Profiler that lets you review various object graphs (I've never used it):
http://www.microsoft.com/downloads/details.aspx?FamilyID=86ce6052-d7f4-4aeb-9b7a-94635beebdda&displaylang=en
Of course, ANTS Profiler (free trial) is an often-recommended profiler. You shouldn't need to manually garbage collect; the GC is a very intelligently built solution that will likely be more optimal than any manual calls you make. A better approach is to be minimalist about the number of objects you keep in memory, and to get rid of memory-heavy objects as soon as possible if memory is the priority.

Strategies For Tracking Down Memory Leaks When You've Done Everything Wrong

My program, alas, has a memory leak somewhere, but I'll be damned if I know what it is.
Its job is to read in a bunch of ~2MB files, do some parsing and string replacement, then output them in various formats. Naturally, this means a lot of strings, and so doing memory tracing shows that I have a lot of strings, which is exactly what I'd expect. The structure of the program is a series of classes (each in their own thread, because I'm an idiot) that acts on an object that represents each file in memory. (Each object has an input queue that uses a lock on both ends. While this means I get to run this simple processing in parallel, it also means I have multiple 2MB objects sitting in memory.) Each object's structure is defined by a schema object.
My processing classes raise events when they've done their processing and pass a reference to the large object that holds all my strings to add it to the next processing object's queue. Replacing the event with a function call to add to the queue does not stop the leak. One of the output formats requires me to use an unmanaged object. Implementing Dispose() on the class does not stop the leak. I've replaced all the references to the schema object with an index name. No dice. I got no idea what's causing it, and no idea where to look. The memory trace doesn't help because all I see are a bunch of strings being created, and I don't see where the references are sticking in memory.
We're pretty much going to give up and roll back at this point, but I have a pathological need to know exactly how I messed this up. I know Stack Overflow can't exactly comb my code, but what strategies can you suggest for tracking this leak down? I'm probably going to do this in my own time, so any approach is viable.
One technique I would try is to systematically reduce the amount of code you need to demonstrate the problem without making the problem go away. This is informally known as "divide and conquer" and is a powerful debugging technique. Once you have a small example that demonstrates the same problem, it will be much easier for you to understand. Perhaps the memory problem will become clearer at that point.
There is only one person who can help you. That person's name is Tess Ferrandez. (hushed silence)
But seriously, read her blog (the first article is pretty pertinent). Seeing how she debugs this stuff will give you a lot of deep insight into what's going on with your problem.
I like the CLR Profiler from Microsoft. It provides some great tools for visualizing the managed heap and tracking down leaks.
I use the dotTrace profiler for tracking down memory leaks. It's a lot more deterministic than methodical trial and error and turns up results a lot faster.
For any actions that the system performs, I take a snapshot then run a few iterations of the function, then take another snapshot. Comparing the two will show you all the objects that were created in between but were not freed. You can then see the stack frame at the point of their creation, and therefore work out what instances are not being freed.
Get this: http://www.red-gate.com/Products/ants_profiler/index.htm
The memory and performance profiling are awesome. Being able to actually see proper numbers instead of guessing makes optimisation pretty fast. I've used it quite a bit at work for reducing the memory footprint of our main app.
- Add code to the constructor of the unmanaged object to log when it's constructed, and record a unique ID. Use that unique ID when the object is destroyed again, and you can at least tell which ones are going astray.
- Grep the code for every place you construct a new object; follow that code path to see if you have a matching destroy.
- Add chaining pointers to the constructed objects, so you have a link to the object constructed before and after the current one. Then you can sweep through them later.
- Add reference counters.
- Is there a "debug malloc" available?
The managed debugging add-in SOS (Son of Strike) is immensely powerful for tracking down managed memory 'leaks', since they are, by definition, discoverable from the GC roots.
It will work in WinDbg or Visual Studio (though it is in many respects easier to use in WinDbg).
It is not at all easy to get to grips with. Here is a tutorial.
I would second the recommendation to check out Tess Ferrandez's blog too.
How do you know for a fact that you actually have a memory leak?
One other thing: you write that your processing classes are using events. If you have registered an event handler, the object that owns the event holds a reference to your handler's target object and will keep it alive - i.e. the GC cannot collect it. Make sure you de-register all event handlers if you want your objects to be garbage collected.
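On the "how do you know" point, one cheap sanity check is to keep only a WeakReference to an object you believe should be gone, force a collection, and see whether it survives. A sketch under that assumption (the object under test is just a placeholder here):

    using System;
    using System.Runtime.CompilerServices;

    static class LeakCheck
    {
        static void Main()
        {
            WeakReference weak = CreateAndDrop();

            // Collect twice with a finalizer pass in between so objects that were
            // only pending finalization are reclaimed as well.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            Console.WriteLine(weak.IsAlive
                ? "Still reachable - something is still rooting it."
                : "Collected - this object is not leaking.");
        }

        // Kept out-of-line so no live local in Main keeps the object reachable.
        [MethodImpl(MethodImplOptions.NoInlining)]
        static WeakReference CreateAndDrop()
        {
            var suspect = new object(); // placeholder for the real object under test
            // ... exercise and then 'release' the object here ...
            return new WeakReference(suspect);
        }
    }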
Be careful how you define "leak". "Uses more memory" or even "uses too much memory" is not the same as "memory leak". This is especially true in a garbage-collected environment. It may simply be that GC hasn't needed to collect the extra memory you're seeing used. Also be careful about the difference between virtual memory use and physical memory use.
Finally, not all "memory leaks" are caused by "memory" sorts of issues. I was once told (not asked) to fix an urgent memory leak that was causing IIS to restart frequently. In fact, I did some profiling and found I was using a lot of strings through the StringBuilder class. I implemented an object pool (from an MSDN article) for the StringBuilders, and memory usage went down substantially.
IIS still restarted just as frequently. This was because there was no memory leak. Instead, there was unmanaged code that claimed to be thread-safe but was not. Using it in a web service (multiple threads) caused it to write all over the C Runtime Library heap. Since nobody was looking for unmanaged exceptions, nobody saw this until I happened to do some profiling with AQtime from AutomatedQA. It happens to have an events window, which happened to display the cries of pain from the C Runtime Library.
I placed locks around the calls to the unmanaged code, and the "memory leak" went away.
If your unmanaged object really is the cause of the leak, you may want to have it call GC.AddMemoryPressure when it allocates unmanaged memory and GC.RemoveMemoryPressure in Finalize/Dispose/wherever it deallocates the unmanaged memory. This will give the GC a better handle on the situation, because it may not otherwise realize there's a need to schedule a collection.
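As a rough sketch of that advice, assuming a wrapper type of your own around an unmanaged allocation (the NativeBuffer name is made up): add the pressure when the unmanaged memory is allocated and remove exactly the same amount once it is freed, whichever of Dispose or the finalizer gets there first:

    using System;
    using System.Runtime.InteropServices;

    class NativeBuffer : IDisposable
    {
        private readonly int _size;
        private IntPtr _buffer;

        public NativeBuffer(int size)
        {
            _size = size;
            _buffer = Marshal.AllocHGlobal(size); // unmanaged allocation
            GC.AddMemoryPressure(size);           // tell the GC it exists
        }

        public void Dispose()
        {
            Free();
            GC.SuppressFinalize(this);
        }

        ~NativeBuffer() => Free();

        private void Free()
        {
            if (_buffer != IntPtr.Zero)
            {
                Marshal.FreeHGlobal(_buffer);
                GC.RemoveMemoryPressure(_size);   // balance the earlier call exactly
                _buffer = IntPtr.Zero;
            }
        }
    }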
You mentioned that you're using events. Are you removing the handlers from those events when you're done with your object? I've found that "loose" event handlers will cause a lot of memory leak problems if you add a bunch of handlers without removing them when you're done.
The best memory profiling tool for .Net is this:
http://memprofiler.com
Also, while I'm here, the best performance profiler for .Net is this:
http://www.yourkit.com/dotnet/download/index.jsp
They are also great value for money, have low overhead and are easy to use. Anyone serious about .Net development should consider both of these a personal investment and purchase immediately. Both of them have a free trial.
I work on a real time game engine with over 700k lines of code written in C# and have spent hundreds of hours using both these tools. I have used the Sci Tech product since 2002 and YourKit! for the last three years. Although I've tried quite a few of the others I have always returned to these.
IMHO, they are both absolutely brilliant.
Similar to Charlie Martin, you can do something like this:
static unsigned __int64 _foo_id = 0;   // shared counter: ++ on construct, -- on destroy

foo::foo()
{
    ++_foo_id;
    if (_foo_id == MAGIC_BAD_ALLOC_ID)
        DebugBreak();                  // stop in the debugger at the suspect allocation
    std::wcerr << L"foo::foo  # " << _foo_id << std::endl;
}

foo::~foo()
{
    --_foo_id;
    std::wcerr << L"foo::~foo # " << _foo_id << std::endl;
}
If you can recreate it, even once or twice with the same allocation id, this will let you look at what is happening right then and there (obviously TLS/threading has to be handled as well, if needed, but I left it out for clarity).
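A roughly equivalent sketch in C#, since the program in question is C# (the type and constant names are made up): hand out a sequential ID per instance, break into the debugger when the known-bad ID from a previous reproducible run comes around again, and log construction/finalization so stray instances stand out:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class Foo
    {
        private const long MagicBadAllocId = 42; // the ID you saw leaking last run
        private static long _nextId;
        private readonly long _id;

        public Foo()
        {
            _id = Interlocked.Increment(ref _nextId);
            if (_id == MagicBadAllocId && Debugger.IsAttached)
                Debugger.Break();                // stop right at the suspect allocation
            Console.Error.WriteLine($"Foo ctor  #{_id}");
        }

        ~Foo()
        {
            Console.Error.WriteLine($"Foo final #{_id}");
        }
    }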

What Are Some Good .NET Profilers?

What profilers have you used when working with .net programs, and which would you particularly recommend?
I have used JetBrains dotTrace and Redgate ANTS extensively. They are fairly similar in features and price. They both offer useful performance profiling and quite basic memory profiling.
dotTrace integrates with Resharper, which is really convenient, as you can profile the performance of a unit test with one click from the IDE. However, dotTrace often seems to give spurious results (e.g. saying that a method took several years to run).
I prefer the way that ANTS presents the profiling results. It shows you the source code and to the left of each line tells you how long it took to run. dotTrace just has a tree view.
EQATEC profiler is quite basic and requires you to compile special instrumented versions of your assemblies which can then be run in the EQATEC profiler. It is, however, free.
Overall I prefer ANTS for performance profiling, although if you use Resharper then the integration of dotTrace is a killer feature and means it beats ANTS in usability.
The free Microsoft CLR Profiler (.Net framework 2.0 / .Net Framework 4.0) is all you need for .NET memory profiling.
2011 Update:
The Scitech memory profiler has quite a basic UI but lots of useful information, including some information on unmanaged memory which dotTrace and ANTS lack - you might find it useful if you are doing COM interop, but I have yet to find any profiler that makes COM memory issues easy to diagnose - you usually have to break out windbg.exe.
The ANTS profiler has come on in leaps and bounds in the last few years, and its memory profiler has some truly useful features which have now pushed it ahead of dotTrace as a package, in my estimation. I'm lucky enough to have licenses for both, but if you are going to buy one .NET profiler for both performance and memory, make it ANTS.
Others have covered performance profiling, but with regard to memory profiling:
I'm currently evaluating both the Scitech .NET Memory Profiler 3.1 and ANTS Memory Profiler 5.1 (current versions as of September 2009). I tried the JetBrains one a year or two ago and it wasn't as good as ANTS (for memory profiling) so I haven't bothered this time. From reading the web sites it looks like it doesn't have the same memory profiling features as the other two.
Both ANTS and the Scitech memory profiler have features that the other doesn't, so which is best will depend upon your preferences. Generally speaking, the Scitech one provides more detailed information while the ANTS one is really incredible at identifying the leaking object. Overall, I prefer the ANTS one because it is so quick at identifying possible leaks.
Here are the main pros and cons of each from my experience:
Common Features of ANTS and Scitech .NET Memory Profiler
Real-time analysis feature
Excellent how-to videos on their web sites
Easy to use
Reasonably performant (obviously slower than without the profiler attached, but not so much you become frustrated)
Show instances of leaking objects
Basically they both do the job pretty well
ANTS
One-click filters to find common leaks, including: objects kept alive only by event handlers, objects that are disposed but still live, and objects that are only being kept alive by a reference from a disposed object. This is probably the killer feature of ANTS - finding leaks is incredibly fast because of this. In my experience, the majority of leaks are caused by event handlers not being unhooked, and ANTS just takes you straight to these objects. Awesome.
Object retention graph. While the same info is available in Scitech, it's much easier to interpret in ANTS.
Shows size with children in addition to size of the object itself (but only when an instance is selected unfortunately, not in the overall class list).
Better integration with Visual Studio (right-click on the graph to jump to the file)
Scitech .NET Memory Profiler
Shows stack trace when object was allocated. This is really useful for objects that are allocated in lots of different places. With ANTS it is difficult to determine exactly where the leaked object was created.
Shows count of disposable objects that were not disposed. While not indicative of a leak, it does identify opportunities to fix this problem and improve your application performance as a result of faster garbage collection.
More detailed filtering options (several columns can be filtered independently).
Presents info on total objects created (including those garbage collected). ANTS only shows 'live' object stats. This makes it easier to analyze and tune overall application performance (e.g. identify where lots of objects are being created unnecessarily but aren't necessarily leaking).
By way of summary, I think ANTS helps you find what's leaking faster, while Scitech provides a bit more detail about your overall application memory performance and individual objects once you know what to look at (e.g. stack trace on creation). If the stack trace and tracking of undisposed disposable objects were added to ANTS, I wouldn't see the need to use anything else.
I recently discovered EQATEC Profiler http://www.eqatec.com/tools/profiler. It works with most .NET versions and on a bunch of platforms. It is easy to use and parts of it are free, even for commercial use.
[Full Disclosure]
While not yet as full-featured as some of the other .NET memory profilers listed here, there is a new entry on the market called JustTrace. It's made by Telerik and its primary goal is to make tracing/profiling easier and faster to do for all types of apps (web/Silverlight/desktop).
If you've ever found profiling and optimization intimidating or slow with other tools, then JustTrace might be worth a look.
Don't forget nProf - a perfectly good, freeware profiler.
I have found dotTrace Profiler by JetBrains to be an excellent profiling tool for .NET and their ASP.NET mode is quality.
ANTS Profiler. I haven't used many, but I don't really have any complaints about ANTS. The visualization is really helpful.
AutomatedQA AQTime for timing and SciTech MemProfiler for memory.
If you're looking for something quick, easy, and free, http://code.google.com/p/slimtune/ seems to do the job fine.
I've been working with JetBrains dotTrace for WinForms and console apps (not tested on ASP.NET yet), and it works quite well.
They recently also added a "Personal License" that is significantly cheaper than the corporate one. Still, if anyone else knows some cheaper or even free ones, I'd like to hear as well :-)
Don't forget the awesome scitech .net memory profiler
It's great for tracking down why your .net app is running out of memory.
I would add that dotTrace's ability to diff memory and performance trace sessions is absolutely invaluable (ANTS may also have a memory diff feature, but I didn't see a performance diff).
Being able to run a profiling session before and after a bug fix or enhancement, then compare the results is incredibly valuable, especially with a mammoth legacy .NET application (as in my case) where performance was never a priority and where finding bottlenecks could be VERY tedious. Doing a before-and-after diff allows you to see the change in call count for each method and the change in duration for each method.
This is helpful not only during code changes, but also if you have an application that uses a different database, say, for each client/customer. If one customer complains of slowness, you can run a profiling session using their database and compare the results with a "fast" database to determine which operations are contributing to the slowness. Of course there are many database-side performance tools, but sometimes it really helps to see the performance metrics from the application side (since that's closer to what the user is actually seeing).
Bottom line: dotTrace works great, and the diff is invaluable.
AQTime is reasonable, but it has a bit of a learning curve and isn't as easy to use as the profiler built into Team Suite.
In the past, I’ve used the profiler that ships with Visual Studio Team System.
The current release of SharpDevelop (3.1.1) has a nice integrated profiler. It's quite fast, and integrates very well into the SharpDevelop IDE and its NUnit runner. Results are displayed in a flexible tree/list style (use LINQ to create your own selection). Double-clicking the displayed method jumps directly into the source code.
I've worked with RedGate's profiler in the past. Did the job for me.
Haven't tried it myself, but maybe dotTrace? Their ReSharper application is certainly a good one. Maybe dotTrace is too :)
I doubt that the profiler which comes with Visual Studio Team System is the best profiler, but I have found it to be good enough on many occasions. What specifically do you need beyond what VS offers?
EDIT: Unfortunately it is only available in VS Team System, but if you have access to that it is worth checking out.
The latest version of ANTS memory profiler (I think it's 5) simply rocks!!! I was hunting a leak using WinDbg and SOS, since that had proved to be the best way before; then I tried ANTS and I got it in minutes. Really a wonderful piece of software.
I would like to add the YourKit Java and .NET profilers. I love it for Java; I haven't tried the .NET version, though.
Unfortunately, most of the profilers I tried failed when used with tail calls, most notably ANTS. I just ended up writing my own. There is a simple implementation on CodeProject that you can use as a base.
Intel® VTune™ Performance Analyzer for quick sampling
I must bring an amazing tool to your notice which I used some time back: AVICode Interceptor Studio. At my previous company we used this wonderful tool to profile the web application (this is supposed to be the single largest web application in the world and the largest civilian IT project ever done). The performance team did wonders with the help of this magnificent tool. It is a pain to configure, but that is a one-time activity and I would say it is worth the time. Check out this page for details.
Thanks,
James
For me, SpeedTrace is the best tool on the market because it does not only help you find bottlenecks inside your applications; it also helps you in troubleshooting scenarios - finding out why your application crashed, why your setup did not install, why your application hung, or why your application's performance is sometimes poor depending on the data input (e.g. identifying slow DB transactions).
I've been testing Telerik's JustTrace recently, and although it is well away from being a finished product, the guys are going in the right direction.
If licensing is an issue, you could try WinDbg for memory profiling.
The NuMega True Time profiler lives on in DevPartner Studio by Micro Focus. It provides line- and method-level detail for .NET apps, requiring only PDBs; no source is needed (but it helps). It can discriminate between algorithmically heavy routines and those with long I/O waits using our proprietary per-thread kernel-mode timing driver. Version 10.5 ships with new 64-process support on February 4, 2011. Shameless plug: I work on the DevPartner product line. Follow up at http://www.DevPartner.com for news of the 10.5 launch.
Disclaimer: I am the Product Manager for DevPartner at Micro Focus.
I've found plenty of problems in a big C# app using this.
Usually the problem occurs during startup or shutdown as plugins are being loaded, and big data structures are being created, destroyed, serialized, or deserialized. Often they are created and initialized more than once, and change handlers get added multiple times, further compounding the problem.
In cases like this, the program can be so sluggish that only 2 samples are sufficient to pinpoint the guilty method / function / property call sites.
We selected YourKit Profiler for .NET at my company as it was the best value (price vs. features). For a small company that wants flexible licensing (floating licenses) it was a perfect choice - ANTS was locked to developer seats at the time.
Also, it provided us with the ability to attach to the running process which was not possible with dotTrace. Beware though that attaching is not the best option as everything .NET will slow down, but this was the only way to profile .NET applications started by other processes.
Feature-wise, ANTS and dotTrace were better - but in the end YourKit was good enough.
If you're on ASP.NET MVC, you can try MVCMiniProfiler (http://benjii.me/2011/07/using-the-mvc-mini-profiler-with-entity-framework/)
