During runtime I find that my application uses a very large amount of memory.
But it seems that I only use 3~4 MemoryStreams, one of which sometimes holds about 81 MB.
The others are mainly 20 MB, 3 MB and 1 MB containers.
But still the application uses 525.xx MB of memory.
I also tried using(...) statements, but without any luck.
So I am asking here for the most effective way to cut down these memory leaks.
In managed .NET apps, you don't generally have memory "leaks" in the original sense of the word, unless you are allocating unmanaged resource handles and not disposing of them correctly. But that doesn't sound like what you're doing.
More likely is that you are holding on to references to objects that you no longer need, and this is keeping the memory "alive" longer than you are expecting.
For example, if you put 5 MB of data into a memory stream and then assign that memory stream to a static field, the 5 MB will never go away for the lifetime of the application. When you no longer need what the static field points to, assign null to it so that the garbage collector can release and reclaim that 5 MB of memory.
Similarly, local variables are not released until the function exits. If you allocate a lot of memory and assign it to a local variable, and then call another function that runs for hours, the local variable will be kept alive the whole time. If you don't need that memory anymore, assign null to the local variable.
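As a rough sketch of both situations (the class, field and sizes here are invented purely for illustration):

using System.IO;

class ReportCache
{
    // A static field keeps whatever it references alive for the lifetime of the application.
    private static MemoryStream cachedReport;

    public static void BuildReport()
    {
        cachedReport = new MemoryStream(new byte[5 * 1024 * 1024]); // ~5 MB stays reachable via the static root

        // ... use the stream ...

        cachedReport = null; // drop the root so the garbage collector may reclaim the 5 MB
    }

    public static void ProcessBatch()
    {
        byte[] buffer = new byte[80 * 1024 * 1024];
        // ... use buffer ...
        buffer = null;       // clear the local before a long-running call so the array can be collected meanwhile
        RunForHours();
    }

    private static void RunForHours() { /* placeholder for a long-running method */ }
}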
How are you determining that your app has a memory leak? If you are looking at the process virtual memory allocation shown by Task Manager, that is not very accurate. An application's memory manager may allocate large chunks of memory from the OS and internally free them for other uses within the application without releasing them back to the OS.
Use common sense practices. Call Dispose or Close as appropriate, and assign null to variables as soon as you no longer need their contents.
Just because the garbage collected environment will let you be lazy doesn't mean you shouldn't pay attention to memory allocation and deallocation patterns in your code.
Your definition of memory leak seems to be unusual... The following code will produce exactly the effect you are observing, but it is rarely called a memory leak:
var data = new byte[512*1024*1024]; // allocate 512 MB; 'data' holds the only reference
data = null; // the array is now unreachable, but the GC decides when (and whether) the OS sees the memory again
But you may actually have legitimate leaks. Memory profilers will show them easily, and huge ones can often be tracked down by code review alone. Since you already know that you have a small number of memory streams, check whether you are keeping them alive by storing them in some list or simply in a member variable. Also check whether your large arrays are staying alive for similar reasons.
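For example, a pattern like this (names invented for illustration) roots every stream in a member list, so none of them can ever be collected:

using System.Collections.Generic;
using System.IO;

class Importer
{
    // Every stream added here stays reachable for as long as the Importer itself does.
    private readonly List<MemoryStream> history = new List<MemoryStream>();

    public void Import(byte[] payload)
    {
        var stream = new MemoryStream(payload);
        history.Add(stream); // innocent-looking bookkeeping, but it keeps the whole buffer alive
        // ... process stream ...
    }
}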
Related
There is a possible memory leak in this statement for a very large string (tempText can grow as big as ~10 MB).
string strXML = new string(tempText.Where(ch => XmlConvert.IsXmlChar(ch)).ToArray());
Memory allocated for strXML doesn't get released even after exiting the function, and I have to call this function multiple times. Is there any possible solution that doesn't involve making this string a class variable?
I'm not very familiar with C# memory management, can someone shed some light on this issue?
The garbage collector doesn't collect objects the instant their lifetime ends. It executes periodically, based on its perceived need, to free up memory. The string will eventually be collected at some indeterminate point in time after it is no longer referenced by any rooted object.
When you build up a large object, it will remain in memory for much longer than other small objects.
Read up on the large object heap and generation 2 garbage collection... it gets technical but those two terms should suffice to point out what's going on here.
That's why you are not seeing the garbage collector reclaim that memory as quickly as you'd like.
In order to overcome this, either allocate work buffers once and reuse them, or work on the data in smaller chunks.
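As one possible sketch of the buffer-reuse idea (not necessarily what the answer had in mind), you could keep a single StringBuilder around and refill it on each call instead of materializing a fresh char[] every time; the returned string is still a large allocation, but the intermediate array is avoided:

using System.Text;
using System.Xml;

class XmlSanitizer
{
    // Reused across calls, so the large working buffer is allocated once
    // instead of a new ~10 MB char[] on every invocation.
    private readonly StringBuilder buffer = new StringBuilder();

    public string Sanitize(string tempText)
    {
        buffer.Clear();
        foreach (char ch in tempText)
        {
            if (XmlConvert.IsXmlChar(ch))
                buffer.Append(ch);
        }
        return buffer.ToString();
    }
}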
I am receiving a very large list as a method argument and would like to remove it from memory after it is used. Normally I would let the GC do its thing, but I need to be very careful of memory usage in this app.
Will this code accomplish my goal? I've read a lot of differing opinions and am confused.
public void Save(IList<Employee> employees)
{
    // I've mapped the passed-in list
    var data = Mapper<Employee, EmployeeDTO>.MapList(employees);

    // ?????????????
    employees = null;
    GC.Collect();

    // Continues to process very long running methods....
    // I don't want this large list to stay in memory
}
Maybe I should use another technique that I'm not aware of?
If the list is not used anymore the GC will automatically collect it when available memory is an issue.
However, if the caller uses the list after passing it to your function, then the GC won't collect it even if you set your parameter to null (all you have is a reference to the list; you can't do anything about other objects that hold references to it as well).
Unless you have a measurable problem don't try and outsmart the GC.
This is not a direct answer to the question, but rather some ideas on how to handle the low-memory situations the poster (#Big Daddy) referred to.
If you're running into low memory situations on an x64 platform with 8 GB of memory, you should determine if your application is responsible for it. If it is, then run a memory profiler (CLR Profiler or something else or even get a full user dump and run WinDbg on it) to see what allocates the memory. It's possible that you have some objects that are not used anymore but are still referenced somewhere - this is not a true memory leak but it's memory that's not freed up in your application - most decent profilers will identify big objects (or objects with a lot of instances) along with their types.
I find it hard to believe that the list passed to this Save function would stress a server with 8 GB of memory but we don't know how much free memory is available to the process and what kind of a process it is (IIS, desktop, etc.)
If Save is called on several threads with huge inputs, it can potentially lead to memory stress but even then, it's not very likely and I'd check various counters and profile data to see when and why memory stress happens.
I have a class that has a byte array holding from 1,048,576 bytes up to 134,217,728 bytes.
In the Dispose method I set the array to null, and the method calling Dispose calls GC.Collect after that.
If I dispose right away I get my memory back, but if I wait something like 10 hours and then dispose, memory usage doesn't change.
Memory usage is based on OS memory allocation. It may be freed immediately, or it may not be; this depends on OS utilization, the application, etc. You free it in the runtime, but that doesn't mean the OS always gets it back. I have to think there is a heuristic based on memory patterns (i.e. time, in this case) that affects the decision of when to return it or not. This was addressed here:
Explicitly freeing memory in c#
Note: if the memory isn't released, that doesn't automatically mean it is being used. It's up to the CLR whether it releases chunks of memory or not, but this memory isn't wasted.
That said, if you want a technical explanation, you'll want to read the literature about the Large Object Heap: http://msdn.microsoft.com/en-us/magazine/cc534993.aspx
Basically, it's a zone of memory where very large objects (more than 85kB) are allocated. It differs from the other zones of memory in that it's never compacted, and thus can become fragmented. I think what happens in your case is:
Case 1: you allocate the object and immediately call GC.Collect. The object is allocated at the end of the heap, then freed. The CLR sees a free segment at the end of the heap and releases it to the OS.
Case 2: you allocate the object and wait for a while. In the meantime, another object is allocated in the LOH. Now, your object isn't the last one anymore. Then, when you call GC.Collect, your object is freed, but the other object(s) are still at the end of the memory segment, so the CLR cannot release the memory to the OS.
Just a guess based on my knowledge of memory management in .NET. I may be completely wrong.
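If you want to poke at this yourself, a rough experiment along these lines can show it; the exact numbers depend heavily on the runtime version, the OS and whatever else the process allocates, so treat it as an illustration only:

using System;

class LohExperiment
{
    static void Main()
    {
        byte[] big = new byte[100 * 1024 * 1024]; // large enough to land on the Large Object Heap
        Console.WriteLine("After alloc:   " + Environment.WorkingSet / (1024 * 1024) + " MB");

        // A later LOH allocation that ends up behind the big array (the answer's Case 2);
        // without it, this would be Case 1.
        byte[] straggler = new byte[1024 * 1024];

        big = null;
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        Console.WriteLine("After collect: " + Environment.WorkingSet / (1024 * 1024) + " MB");

        GC.KeepAlive(straggler); // keep the straggler reachable until the end
    }
}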
Your findings are not unusual, but it doesn't mean that anything is wrong either. In order to be collected, something must prompt the GC to collect (often an attempted allocation). As a result, you can build an app that consumes a bunch of memory, releases it, and then goes idle. If there is no memory pressure on the machine, and if your app doesn't try to do anything after that, the GC won't fire (because it doesn't need to). Once you get busy, the GC will kick in and do its job. This behavior is very commonly mistaken for a leak.
BTW:
Are you using that very large array more than once? If so, you might be better off keeping it around and reusing it. Reason: any object larger than 85,000 bytes is allocated on the Large Object Heap. That heap only gets GC'd on Generation 2 collections. So if you are allocating and reallocating arrays very often, you will be causing a lot of Gen 2 (expensive) collections.
(note: that doesn't mean that there's a hard and fast rule to always reuse large arrays, but if you are doing a lot of allocation/deallocation/allocation of the array, you should measure how much it helps if you re-use).
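A minimal sketch of the reuse idea, assuming the required buffer size is known up front:

using System.IO;

class FrameProcessor
{
    // Allocated once; at 1 MB it lives on the Large Object Heap,
    // so reusing it avoids repeated LOH allocations and the Gen 2 collections they trigger.
    private readonly byte[] workBuffer = new byte[1024 * 1024];

    public void ProcessFrame(Stream source)
    {
        int read = source.Read(workBuffer, 0, workBuffer.Length);
        // ... work with the first 'read' bytes of workBuffer ...
    }
}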
I am working on a relatively large solution in Visual Studio 2010. It has various projects, one of them being an XNA Game-project, and another one being an ASP.NET MVC 2-project.
With both projects I am facing the same issue: after starting them in debug mode, memory usage keeps rising. They start at 40 and 100 MB of memory usage respectively, but both climb to 1.5 GB relatively quickly (in 10 and 30 minutes respectively). After that, it sometimes drops back down to close to the initial usage, and at other times it just throws OutOfMemoryExceptions.
Of course this would indicate severe memory leaks, so that is where I initially tried to spot the problem. After searching for leaks unsuccessfully, I tried calling GC.Collect() regularly (about once every 10 seconds). After introducing this "hack", memory usage stayed at 45 and 120 MB respectively for 24 hours (until I stopped testing).
.NET's garbage collection is supposed to be "very good", but I can't help suspecting that it just isn't doing its job. I have used CLR Profiler in an attempt to troubleshoot the issue, and it showed that the XNA project was holding on to a lot of byte arrays I was indeed using, but to which the references should already have been removed, and which should therefore have been collected by the garbage collector.
Again, when I call GC.Collect() regularly, the memory usage issues seem to have gone. Does anyone know what could be causing this high memory usage? Is it possibly related to running in Debug mode?
After searching for leaks unsuccessfully
Try harder =)
Memory leaks in a managed language can be tricky to track down. I have had good experiences with the Redgate ANTS Memory Profiler. It's not free, but they give you a 14-day, full-featured trial. It has a nice UI and shows you where your memory is allocated and why these objects are being kept in memory.
As Alex says, event handlers are a very common source of memory leaks in a .NET app. Consider this:
public static class SomeStaticClass
{
    public static event EventHandler SomeEvent;
}

class Foo
{
    public Foo()
    {
        SomeStaticClass.SomeEvent += MyHandler;
    }

    private void MyHandler(object sender, EventArgs e) { /* whatever */ }
}
I used a static class to make the problem as obvious as possible here. Let's say that, during the life of your application, many Foo objects are created. Each Foo subscribes to the SomeEvent event of the static class.
The Foo objects may fall out of scope at one time or another, but the static class maintains a reference to each one via the event handler delegate. Thus, they are kept alive indefinitely. In this case, the event handler simply needs to be "unhooked".
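One common way to do the unhooking, sketched here against the classes above, is to make the subscriber disposable and unsubscribe in Dispose:

using System;

class Foo : IDisposable
{
    public Foo()
    {
        SomeStaticClass.SomeEvent += MyHandler;
    }

    public void Dispose()
    {
        // Removing the handler drops the static class's reference to this instance,
        // so it can be collected once nothing else refers to it.
        SomeStaticClass.SomeEvent -= MyHandler;
    }

    private void MyHandler(object sender, EventArgs e) { /* whatever */ }
}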
...the XNA project seemed to have saved a lot of byte arrays I was indeed using...
You may be running into fragmentation in the LOH. If you are allocating large objects very frequently they may be causing the problem. The total size of these objects may be much smaller than the total memory allocated to the runtime, but due to fragmentation there is a lot of unused memory allocated to your application.
The profiler I linked to above will tell you if this is a problem. If it is, you will likely be able to track it down to an object leak somewhere. I just fixed a problem in my app showing the same behavior and it was due to a MemoryStream not releasing its internal byte[] even after calling Dispose() on it. Wrapping the stream in a dummy stream and nulling it out fixed the problem.
Also, stating the obvious, make sure to Dispose() of your objects that implement IDisposable. There may be native resources lying around. Again, a good profiler will catch this.
My suggestion; it's not the GC, the problem is in your app. Use a profiler, get your app in a high memory consumption state, take a memory snapshot and start analyzing.
First and foremost, the GC works, and works well. There's no bug in it that you have just discovered.
Now that we've gotten that out of the way, some thoughts:
Are you using too many threads?
Remember that the GC is non-deterministic; it will run whenever it thinks it needs to run (even if you call GC.Collect()).
Are you sure all your references are going out of scope?
What are you loading into memory in the first place? Large images? Large text files?
Your profiler should tell you what's using so much memory. Start cutting at the biggest culprits as much as you can.
Also, calling GC.Collect() every X seconds is a bad idea and will unlikely solve your real problem.
Analyzing memory issues in .NET is not a trivial task, and you definitely should read several good articles and try different tools to achieve a result. I ended up with the following article after my investigations: http://www.alexatnet.com/content/net-memory-management-and-garbage-collector You can also try to read some of Jeffrey Richter's articles, like this one: http://msdn.microsoft.com/en-us/magazine/bb985010.aspx
In my experience, these are the two most common causes of out-of-memory issues:
Event handlers - they may keep an object alive even when no other objects reference it. So ideally you need to unsubscribe event handlers so the object can be collected.
The finalizer thread is blocked by some other thread in STA mode. For example, when an STA thread does a lot of work, the other threads are stopped and the objects in the finalization queue cannot be destroyed.
Edit: Added link to Large Object Heap fragmentation.
Edit: Since it looks like it is a problem with allocating and throwing away the Textures, can you use Texture2D.SetData to reuse the large byte[]s?
First, you need to figure out whether it is managed or unmanaged memory that is leaking.
Use perfmon to see what happens to your process's '.NET CLR Memory\# Bytes in all Heaps' and 'Process\Private Bytes' counters. Compare the numbers as the memory rises. If the rise in private bytes outpaces the rise in heap memory, then it's unmanaged memory growth.
Unmanaged memory growth would point to objects that are not being disposed (but eventually collected when their finalizer executes).
If it's managed memory growth, then we'll need to see which generation/LOH (there are also performance counters for each generation of heap bytes).
If it's Large Object Heap bytes, you'll want to reconsider the use and throwing away of large byte arrays. Perhaps the byte arrays can be re-used instead of discarded. Also, consider allocating large byte arrays that are powers of 2. This way, when disposed, you'll leave a large "hole" in the large object heap that can be filled by another object of the same size.
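A small sketch of the power-of-two sizing idea (the helper name is made up; it assumes sizes up to about 1 GB):

static class BufferSizing
{
    // Round a requested size up to the next power of two so that a freed
    // LOH "hole" can be refilled by a later allocation of the same size.
    public static int RoundUpToPowerOfTwo(int size)
    {
        int result = 1;
        while (result < size)
            result <<= 1;
        return result;
    }
}

// Usage: instead of new byte[dataLength], allocate a predictable bucket size:
// byte[] buffer = new byte[BufferSizing.RoundUpToPowerOfTwo(dataLength)];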
A final concern is pinned memory, but I don't have any advice for you on this because I haven't ever messed with it.
I would also add that if you are doing any file access, make sure that you are closing and/or disposing of any Readers or Writers. You should have a matching 1-1 between opening any file and closing it.
Also, I usually use the using clause for resources, like a Sql Connection:
using (var connection = new SqlConnection())
{
    // Do sql connection work in here.
}
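The same pattern covers file readers and writers; a minimal sketch (the file name is just a placeholder):

using (var reader = new StreamReader("data.txt"))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        // Process each line; the file handle is closed when the using block ends.
    }
}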
Are you implementing IDisposable on any objects and possibly doing something custom that is causing any issues? I would double check all of your IDisposable code.
The GC doesn't take the unmanaged heap into account. If you are creating lots of objects that are merely C# wrappers around larger unmanaged memory, then your memory is being devoured, but the GC can't make rational decisions based on this, as it only sees the managed heap.
You end up in a situation where the GC doesn't think you are short of memory, because most of the things on your gen 1 heap are 8-byte references, when in actual fact they are like icebergs at sea: most of the memory is below the surface!
You can make use of these GC calls:
GC.AddMemoryPressure(sizeOfField);
GC.RemoveMemoryPressure(sizeOfField);
These methods allow the garbage collector to see the unmanaged memory (if you provide it with the right figures).
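A hedged sketch of how a thin wrapper over an unmanaged allocation might report that weight (the class is invented; GC.AddMemoryPressure and GC.RemoveMemoryPressure are the real APIs):

using System;
using System.Runtime.InteropServices;

sealed class UnmanagedBuffer : IDisposable
{
    private readonly long size;
    private IntPtr pointer;

    public UnmanagedBuffer(long size)
    {
        this.size = size;
        pointer = Marshal.AllocHGlobal((IntPtr)size);
        GC.AddMemoryPressure(size); // tell the GC this small wrapper really "weighs" size bytes
    }

    public void Dispose()
    {
        if (pointer != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(pointer);
            pointer = IntPtr.Zero;
            GC.RemoveMemoryPressure(size);
        }
    }
}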
In C++ it is easily possible to have a permanent memory leak - just allocate memory and don't release it:
new char; //permanent memory leak guaranteed
and that memory stays allocated for the lifetime of the heap (usually the same as program runtime duration).
Is the same thing (a case that leads to a specific unreferenced object never being released while the memory management mechanisms are working properly) possible in a C# program?
I've carefully read this question and the answers to it. It mentions some cases which lead to higher memory consumption than expected, as well as what are, in my opinion, rather extreme cases like deadlocking the finalizer thread. But can a permanent leak be formed in a C# program with normally functioning memory management?
It depends on how you define a memory leak. In an unmanaged language, we typically think of a memory leak as a situation where memory has been allocated, and no references to it exist, so we are unable to free it.
That kind of leak is pretty much impossible to create in .NET (unless you call out into unmanaged code, or unless there's a bug in the runtime).
However, you can get another "weaker" form of leaks: when a reference to the memory does exist (so it is still possible to find and reset the reference, allowing the GC to free the memory normally), but you thought it didn't, so you assumed the object being referenced would get GC'ed. That can easily lead to unbounded growth in memory consumption, as you're piling up references to objects that are no longer used, but which can't be garbage collected because they're still referenced somewhere in your app.
So what is typically considered a memory leak in .NET is simply a situation where you forgot that you have a reference to an object (for example because you failed to unsubscribe from an event). But the reference exists, and if you remember about it, you can clear it and the leak will go away.
You can write unsafe code in .NET if you wish; you have to enclose your block of code with the unsafe keyword. So if you are writing unsafe code, are you not back to the problem of managing memory by yourself, and hence back to the possibility of a memory leak?
It's not exactly a memory leak, but if you're communicating with hardware drivers directly (i.e. not through a properly-written .net extension of a set of drivers) then it's fairly possible to put the hardware into a state where, although there may or may not be an actual memory leak in your code, you can no longer access the hardware without rebooting it or the PC...
Not sure if this is a useful answer to your question, but I felt it was worth mentioning.
GCs usually delay the collection of unreachable memory to some later time, when an analysis of the references shows that the memory is unreachable. (In some restricted cases, the compiler may help the GC and tell it that a memory zone is unreachable as soon as it becomes so.)
Depending on the GC algorithm, unreachable memory is either detected as soon as a collection cycle is run, or it may stay undetected for a certain number of collection cycles (generational GCs show this behavior, for instance). Some techniques even have blind spots which are never collected (the use of reference-counted pointers, for instance); some deny these the name of GC algorithms, and they are probably unsuitable in a general-purpose context.
Proving that a specific zone will be reclaimed depends on the algorithm and on the memory allocation pattern. For a simple algorithm like mark and sweep, it is easy to give a bound (say, by the next collection cycle); for more complex algorithms the matter is more complicated (under a scheme which uses a dynamic number of generations, the conditions under which a full collection is done are not meaningful to someone unfamiliar with the details of the algorithm and the precise heuristics used).
A simple answer is that classic memory leaks are impossible in GC environments: classically, memory is leaked because, being an unreferenced block, there is no way for the software to find it in order to clean it up.
On the other hand, a memory leak is any situation where the memory usage of a program has unbounded growth. This definition is useful when analyzing how software might fail when run as a service (where services are expected to run, perhaps for months at a time).
As such, any growable data structure that continues to hold onto references onto unneeded objects could cause service software to effectively fail because of address space exhaustion.
Easiest memory leak:
public static class StaticStuff
{
    public static event Action SomeStaticEvent;
}

public class Listener
{
    public Listener()
    {
        StaticStuff.SomeStaticEvent += DoSomething;
    }

    void DoSomething() { }
}
Instances of Listener will never be collected.
If we define a memory leak as a condition where memory that could be used for creating objects cannot be used, or where memory that could be released is not released, then memory leaks can happen in:
Events in WPF, where weak events need to be used. This can especially happen with attached properties.
Large objects
Large Object Heap Fragmentation
http://msdn.microsoft.com/en-us/magazine/cc534993.aspx