Dispose and Finalizer not called, OutOfMemoryException occurs - c#

I am trying to write a class that wraps a buffer allocated with Marshal.AllocHGlobal. I implemented the IDisposable interface and added a finalizer that should release the memory when I no longer need it (when the object goes out of scope).
When I test the class, the GC never calls the finalizer or the Dispose method of my class, even though the objects are out of scope. As a result, I get an OutOfMemoryException.
Why does the GC not call the finalizer, and why does the memory not get freed?
Here is a short example that illustrates the problem. When I run it, nothing is written to the console (except Unhandled Exception: OutOfMemoryException).
using System;
using System.Runtime.InteropServices;
using System.Threading;

class Buffer : IDisposable
{
    public IntPtr buf { get; set; }

    public Buffer()
    {
        buf = Marshal.AllocHGlobal(4 * 1024 * 1024);
    }

    ~Buffer()
    {
        Console.WriteLine("Finalizer called");
        Dispose(false);
    }

    public void Dispose()
    {
        Console.WriteLine("Dispose called");
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    internal virtual void Dispose(bool disposing)
    {
        if (buf != IntPtr.Zero)
        {
            Console.WriteLine("Releasing memory");
            Marshal.FreeHGlobal(buf);
            buf = IntPtr.Zero;
        }
    }
}

class Program
{
    static void Main(string[] args)
    {
        while (true)
        {
            Buffer b = new Buffer();
            Thread.Sleep(20);
        }
    }
}
EDIT: Here are the .NET performance counters for my test program when it crashes:

You need to tell the garbage collector that your very small managed objects, each with a single IntPtr field, carry a high cost in unmanaged memory. As written, the garbage collector is blissfully unaware that each small managed object holds a large amount of unmanaged memory, so it has no reason to perform any collection.
You can call GC.AddMemoryPressure when you allocate the unmanaged memory and GC.RemoveMemoryPressure when you free it.
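Applied to the Buffer class from the question, the pattern might look like this (a sketch; the 4 MB size is taken from the original example):

```csharp
using System;
using System.Runtime.InteropServices;

class Buffer : IDisposable
{
    private const int Size = 4 * 1024 * 1024;
    public IntPtr buf { get; private set; }

    public Buffer()
    {
        buf = Marshal.AllocHGlobal(Size);
        // Tell the GC this object keeps 4 MB of unmanaged memory alive,
        // so allocating many Buffers now triggers collections.
        GC.AddMemoryPressure(Size);
    }

    ~Buffer() { Dispose(false); }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (buf != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(buf);
            buf = IntPtr.Zero;
            // Remove exactly the pressure we added in the constructor.
            GC.RemoveMemoryPressure(Size);
        }
    }
}
```

With this change the loop in Main should run indefinitely: once enough pressure accumulates, the GC collects the unreachable Buffer objects, their finalizers free the unmanaged blocks, and the pressure is removed again.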

Garbage collection occurs when one of the following conditions is true:
- The system has low physical memory.
- The memory that is used by allocated objects on the managed heap surpasses an acceptable threshold. This threshold is continuously adjusted as the process runs.
- The GC.Collect method is called. In almost all cases, you do not have to call this method, because the garbage collector runs continuously. This method is primarily used for unique situations and testing.
Also, the garbage collector tracks memory only on the managed heap, so for this program the only condition that can trigger a GC is the first one.
I compiled the program; when the target CPU is x86, it throws an OutOfMemoryException when the private bytes of the process reach about 2 GB. When I ran the program, I noticed that private bytes increase quickly, but the working set and the system's physical memory usage increase very slowly.
As for private bytes and working set, this post explains:
Private Bytes refer to the amount of memory that the process executable has asked for - not necessarily the amount it is actually using. They are "private" because they (usually) exclude memory-mapped files (i.e. shared DLLs). But - here's the catch - they don't necessarily exclude memory allocated by those files. There is no way to tell whether a change in private bytes was due to the executable itself, or due to a linked library. Private bytes are also not exclusively physical memory; they can be paged to disk or in the standby page list (i.e. no longer in use, but not paged yet either).
Working Set refers to the total physical memory (RAM) used by the process. However, unlike private bytes, this also includes memory-mapped files and various other resources, so it's an even less accurate measurement than the private bytes. This is the same value that gets reported in Task Manager's "Mem Usage" and has been the source of endless amounts of confusion in recent years. Memory in the Working Set is "physical" in the sense that it can be addressed without a page fault; however, the standby page list is also still physically in memory but not reported in the Working Set, and this is why you might see the "Mem Usage" suddenly drop when you minimize an application.
Marshal.AllocHGlobal just increases the private bytes; the working set stays small, so it doesn't trigger a GC either.
Please refer to this: Fundamentals of Garbage Collection

IDisposable is declarative; nothing calls Dispose automatically, and the finalizer only runs when a garbage collection actually happens.
You can force a garbage collection by calling
GC.Collect
http://msdn.microsoft.com/en-us/library/xe0c2357(v=vs.110).aspx
I would also recommend using performance counters to see your app's memory consumption and whether the GC has already run. See here how to do it: http://msdn.microsoft.com/en-us/library/x2tyfybc(v=vs.110).aspx
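A minimal way to test this suggestion (a diagnostic sketch, not a production fix; the WeakReference is only there to observe whether the object was collected):

```csharp
using System;

class ForceGcDemo
{
    static void Main()
    {
        var weak = new WeakReference(new byte[1000]);
        // Force a full collection, run pending finalizers, then collect again.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        // The byte[] had no remaining strong references; in a release build
        // this typically prints False, showing the object was reclaimed.
        Console.WriteLine(weak.IsAlive);
    }
}
```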

Related

Memory leaks in C# while using C++/CLI defined class with finalizer

When I implement a class in a C++/CLI DLL:
public ref class DummyClass
{
protected:
    !DummyClass()
    {
        // some dummy code:
        std::cout << "hello" << std::endl;
    }
};
and when I load that DLL into a C# project and use the class just by repeatedly creating an object:
static void Main()
{
    while (true)
    {
        var obj = new DummyClass();
    }
}
then, while running the program, memory slowly grows until an OutOfMemoryException is thrown.
It seems that this memory leak (or poorly-timed garbage collection) happens every time I implement a finalizer in C++/CLI.
Why does this memory leak happen? How can I avoid it and still be able to use a finalizer for some other (more complicated) purpose?
UPDATE: The cause is definitely not the writing to Console/stdout or other non-standard code in the finalizer; the following class shows the same memory-leaking behaviour:
public ref class DummyClass
{
private:
    double * ptr;
public:
    DummyClass()
    {
        ptr = new double[5];
    }
protected:
    !DummyClass()
    {
        delete [] ptr;
    }
};
When you allocate faster than you can garbage collect, you will run into an OOM. If you do heavy allocations, the CLR will insert a Sleep(xx) to throttle allocation, but this is not enough in your extreme case.
When you implement a finalizer, your object is added to the finalization queue and removed from it once it has been finalized. This imposes additional overhead and makes your object live longer than necessary. Even if your object could be freed during a cheap Gen 0 GC, it is still referenced by the finalization queue. When a full GC happens, the CLR triggers the finalizer thread to start cleaning up. This does not help, since you allocate faster than you can finalize (writing to stdout is very slow), and your finalization queue becomes bigger and bigger, leading to slower and slower finalization.
I have not measured it, but I think even an empty finalizer would cause this issue, since the increased object lifetime and the handling of the two finalization queues (finalization queue and f-reachable queue) impose enough overhead to make finalization slower than allocation.
You need to remember that finalization is an inherently asynchronous operation with no guarantee of execution at a specific point in time. The CLR will never wait for all pending finalizers to run before allowing additional allocations. If you allocate on 10 threads, there will still be only one finalizer thread cleaning up after you. If you want to rely on deterministic finalization, you will need to wait by calling GC.WaitForPendingFinalizers()
but this will bring your performance to a grinding halt.
Your OOM is therefore expected.
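The throttling idea above can be sketched in C# (hypothetical: it periodically blocks the allocating loop until the finalizer thread catches up, trading throughput for bounded queue growth):

```csharp
using System;

class Finalizable
{
    // An empty finalizer is enough to put the object on the finalization queue.
    ~Finalizable() { }
}

class Throttled
{
    static void Main()
    {
        for (int i = 0; i < 1_000_000; i++)
        {
            var obj = new Finalizable();
            if (i % 10_000 == 0)
            {
                // Let the finalizer thread drain the f-reachable queue
                // before allocating more finalizable objects.
                GC.Collect();
                GC.WaitForPendingFinalizers();
            }
        }
        Console.WriteLine("done");
    }
}
```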
You should use the AddMemoryPressure function, otherwise the garbage collector underestimates the need for cleaning up these objects in a timely manner.

C# not releasing memory after task complete

The following code is a simplified example of an issue I am seeing. This application consumes approx 4GB of memory before throwing an exception as the dictionary is too big.
class Program
{
    static void Main(string[] args)
    {
        Program program = new Program();
        while (true)
        {
            program.Method();
            Console.ReadLine();
        }
    }

    public void Method()
    {
        WasteOfMemory memory = new WasteOfMemory();
        Task task = new Task(memory.WasteMemory);
        task.Start();
    }
}
public class WasteOfMemory
{
    public void WasteMemory()
    {
        Dictionary<string, string> aMassiveList = new Dictionary<string, string>();
        try
        {
            long i = 0;
            while (true)
            {
                aMassiveList.Add(i.ToString(), "I am a line of text designed to waste space.... I am exceptionally useful........");
                i++;
            }
        }
        catch (Exception e)
        {
            Console.WriteLine("I have broken myself");
        }
    }
}
This is all as expected, although what we cannot currently work out is when this memory should be released from the CLR.
We have let the task complete and then simulated a memory overload situation, but the memory consumed by the dictionary is not released. As the OS is running out of memory, is it not putting pressure on the CLR to release the memory?
However and even more confusing, if we wait until the task has completed, then hit enter to run the task again the memory is released, so obviously the previous dictionary has been garbage collected (hasn't it?).
So, why is the memory not being released? And how can we get the CLR to release the memory?
Any explanations or solutions would be greatly appreciated.
EDIT: Following the replies, particularly Beska's, it is obvious my description of the issue is not the clearest, so I will try to clarify.
The code may not be the best example, sorry! It was a quick crude piece of code to try to replicate the issue.
The dictionary is used here to replicate the fact that we have a large custom data object which fills a large chunk of our memory and is not released after the task has completed.
In the example, the dictionary fills up to its internal limit and then throws an exception; it does NOT keep filling forever! This happens well before our memory is full, and it does not cause an OutOfMemoryException. Hence the result is a large object in memory, and then the task completes.
At this point we would expect the dictionary to be out of scope, as both the task and the method 'Method' have completed. Hence, we would expect the dictionary to be garbage collected and the memory reclaimed. In reality, the memory is not freed until 'Method' is called again, creating a new WasteOfMemory instance and starting a new task.
Hopefully that will clarify the issue a bit
The garbage collector only frees memory that is no longer in use, that is, objects with no references pointing to them. In your case:
(1) your program runs infinitely without terminating, and
(2) you never change the reference to your dictionary,
so the GC certainly has no reason to touch the dictionary.
So for me, your program is doing exactly what it is supposed to do.
Okay, I've been following this...I think there are a couple issues, some of which people have touched on, but I think not answering the real question (which, admittedly, took me a while to recognize, and I'm not sure I'm answering what you want even now.)
This is all as expected, although what we cannot currently work out is when this memory should be released from the CLR.
As others have said, while the task is running, the dictionary will not be released. It's being used. It gets bigger until you run out of memory. I'm pretty sure you understand this.
We have let the task complete and then simulated a memory overload situation, but the memory consumed by the dictionary is not released. As the OS is running out of memory, is it not putting pressure on the CLR to release the memory?
Here, I think, is the real question.
If I understand you correctly, you're saying you set this up to fill up memory. And then, after it crashes (but before you hit return to start a new task), you're trying other things outside of this program, such as running other programs in Windows, to try to get the GC to collect the memory, right? Hoping that the OS would talk to the GC and start pressuring it to do its thing.
However and even more confusing, if we wait until the task has completed, then hit enter to run the task again the memory is released, so obviously the previous dictionary has been garbage collected (hasn't it?).
I think you answered your own question... it has not necessarily been released until you hit return to start a new task. The new task needs memory, so it goes to the GC, and the GC happily collects the memory from the previous task, which has now ended (after throwing from full memory).
So, why is the memory not being released? And how can we get the CLR to release the memory?
I don't know that you can force the GC to release memory. Generally speaking, it does it when it wants (though some hacker types might know some slick way to force its hand.) Of course, .NET decides when to run the GC, and since nothing is happening while the program is just sitting there, it may well be deciding that it doesn't need to. As to whether the OS can pressure the GC to run, it seems from your tests the answer is "no". A bit counter-intuitive perhaps.
Is that what you were trying to get at?
The memory is not being released because the scope containing aMassiveList never ends. When a function returns, it releases all unreferenced resources created inside it.
In your case, aMassiveList never leaves scope. If you want your function to never return, you have to find a way to 'process' your info and release it instead of storing all of it forever.
If you create a function that allocates ever more resources and never releases them, you will end up consuming all the memory.
GC will only release unreferenced objects, so as the dictionary is being referenced by your program it can't be released by the GC
The way you've written the WasteMemory method, it will never exit (unless the variable "i" overflows, which won't happen this year), and BECAUSE IT WILL NEVER EXIT it will keep the reference to the internal Dictionary IN USE.
Daniel White is right, you should read about how GC works.
If the references are in use, GC will not collect the referenced memory. Otherwise, how would any program work?
I don't see what you expect the CLR/GC to do here. There's nothing to garbage-collect inside one run of your WasteMemory method.
However and even more confusing, if we wait until the task has completed, then hit enter to run the task again the memory is released, so obviously the previous dictionary has been garbage collected (hasn't it?).
When you press Enter, a new task is created and started. It's not the same task, it's a new task - a new object holding a reference to a new WasteOfMemory instance.
The old task will keep running and the memory it uses will NOT be collected because the old task keeps running in background and it keeps USING that memory.
I'm not sure why - and most importantly HOW - you observe the memory of the old task being released.
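One way to check whether the old dictionary has actually become collectible is to hold only a WeakReference to it. This is a diagnostic sketch, not part of the original program:

```csharp
using System;
using System.Collections.Generic;

class CollectibilityDemo
{
    static WeakReference MakeGarbage()
    {
        // The dictionary is only reachable inside this method.
        var dict = new Dictionary<string, string> { ["key"] = "value" };
        return new WeakReference(dict);
    }

    static void Main()
    {
        WeakReference weak = MakeGarbage();
        // After the method returns, nothing references the dictionary,
        // so a forced full collection should reclaim it.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        // In a release build this typically prints False.
        Console.WriteLine(weak.IsAlive);
    }
}
```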
Change your method to use a using statement.
Example:
using (WasteOfMemory memory = new WasteOfMemory())
{
    Task task = new Task(memory.WasteMemory);
    task.Start();
}
And make the WasteOfMemory class disposable:
#region Dispose
private IntPtr handle;
private Component component = new Component();
private bool disposed = false;

public WasteOfMemory()
{
}

public WasteOfMemory(IntPtr handle)
{
    this.handle = handle;
}

public void Dispose()
{
    Dispose(true);
    GC.SuppressFinalize(this);
}

private void Dispose(bool disposing)
{
    if (!this.disposed)
    {
        if (disposing)
        {
            component.Dispose();
        }
        CloseHandle(handle);
        handle = IntPtr.Zero;
    }
    disposed = true;
}

[System.Runtime.InteropServices.DllImport("Kernel32")]
private extern static Boolean CloseHandle(IntPtr handle);

~WasteOfMemory()
{
    Dispose(false);
}
#endregion

working with unmanaged resources

I use a 3rd-party library which is a wrapper around a native DLL. The library contains a type XImage; XImage has some properties and an IntPtr Data() method. XImage also implements IDisposable, but I don't know if it is implemented correctly.
I get many XImages from a TCP connection and show them as a movie in a PictureBox.
I used to convert XImage to System.Drawing.Image and view them in a PictureBox, but I got an AccessViolationException.
So I made a wrapper around XImage called Frame.
public class Frame : IDisposable
{
    public uint size { get; private set; }
    private Image image;
    public XImage XImage { get; set; }
    public Image Image { get { return image ?? (image = GetBitmap(this.XImage)); } }
    public DateTime Time { get; set; }

    public Frame(XImage xImage)
    {
        this.XImage = xImage;
        this.size = XImage.ImageBufferSize();
        GC.AddMemoryPressure(size);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    ~Frame()
    {
        Dispose(false);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            try
            {
                image.Dispose();
            }
            catch { }
            finally
            {
                image = null;
            }
            try
            {
                XImage.Dispose();
            }
            catch { }
            finally { XImage = null; }
        }
        GC.RemoveMemoryPressure(size);
    }
}
and by handling references to Frame I solved the AccessViolationException.
Now I have another issue: when I run the program from Visual Studio (F5 - Start Debugging) everything is okay, but when I run it from the .exe file or with Ctrl+F5 (Start Without Debugging), the memory usage grows larger and larger until I get an OutOfMemoryException (build configuration: Release, x86). What should I do?
---- EDIT ----
I found that GC.AddMemoryPressure and GC.RemoveMemoryPressure just make garbage collection run more often; my problem now is that I have small objects that hold a handle to large unmanaged memory, and the GC is not collecting these small objects.
---- EDIT ----
Calling GC.Collect solves the problem at run-time; I set up a timer and call GC.Collect periodically, but it makes the application freeze for a short period, so I don't want to use this approach.
I have found that the GC has limitations and may not work too well under very heavy pressure and for memory-intensive applications. I have an application that does not do anything with unmanaged resources directly, all standard .NET components, and it can still choke on memory. It can use GBs of RAM, not because of enormous memory requirements, but because big objects are created and destroyed relatively quickly and apparently not collected often enough. The application has no memory leaks, as everything is freed when a collection is forced. It looks like the GC is not always able to collect unused objects in time, i.e. before an OutOfMemoryException. It waits for the best moment, but before it makes up its mind it is too late. When I periodically force a collection, the application runs with no problems.
It is worth mentioning that OutOfMemoryException does not always mean that you're actually out of free memory. It can also mean there is no big enough contiguous memory chunk available. This may be the case especially when working with video and images. GC might think there is still plenty of memory free but it is too fragmented for your application. I am sure GC takes fragmentation into account but it is possible that it's not always getting it right.
If you are sure the library is not the problem, my advice is to experiment more with the memory-pressure methods (AddMemoryPressure and RemoveMemoryPressure) to help the GC do its thing on time. It may solve your problems, since you work with an unmanaged library that may be handling significant amounts of memory behind the GC's back. Alternatively, do as you do now with GC.Collect. Manual collection may not be ideal, but I believe there are cases where it is justified. Of course, expect that manual collection of lots of objects may have an impact on your application's performance.
EDIT
If manual collection introduces too much of a performance impact, try to use overloaded versions of GC.Collect that give you a bit more control.
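The overloads mentioned above can be used like this (a sketch; GCCollectionMode.Optimized lets the runtime decide whether a collection is actually worthwhile, and the non-blocking form avoids the freeze described in the question):

```csharp
using System;

class GcControlDemo
{
    static void Main()
    {
        // Collect only up to generation 1 (cheaper than a full GC).
        GC.Collect(1);

        // Request a full collection, but let the GC decide whether it is
        // productive, and do not block waiting for it to finish.
        GC.Collect(GC.MaxGeneration, GCCollectionMode.Optimized, blocking: false);

        Console.WriteLine("collections requested");
    }
}
```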

C# MemoryStream leaking memory, after disposing/close/etc?

I've been tracking a massive memory leak in my application, and it seems the issue is the MemoryStream class. Whenever I use one, either with the using keyword or with explicit Close/Dispose, the memory is never collected by the garbage collector. What is wrong here?
byte[] bData = System.IO.File.ReadAllBytes("F:\\application_exit_bw.png");
using (System.IO.MemoryStream hMemoryStreamOutput = new System.IO.MemoryStream())
{
    for (int i = 0; i < 10000; i++) hMemoryStreamOutput.Write(bData, 0, bData.Length);
}
Thread.Sleep(Timeout.Infinite);
With explicit Close/Dispose the behaviour stays the same. The memory is occupied and stays that way until I close my application, or until the application has filled all of the system memory. Help?
There is nothing wrong with the MemoryStream class or its usage in your sample code. The GC in .NET doesn't clean up memory immediately after it is no longer referenced. Instead, it reclaims it when free space in the heap reaches a certain threshold or when it is explicitly invoked with a GC.Collect call.
In this scenario, the only way the memory would be freed is if a GC occurred immediately after the using statement and before the Thread.Sleep call. This is fairly unlikely to happen, and hence if you profiled the program it would have the appearance of a memory leak when it's not actually leaking.
This is a symptom of a non-deterministic GC. The GC does not make any guarantees at all about when memory will be freed. This is normal, expected and wanted behavior.
Try calling GC.Collect() to see if this fixes your problem. Also, you need to run in release mode because in debug mode the JIT extends the lifetime of local variables to the end of the method even if they are not used after some point.
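To verify it is not a real leak, one could measure the managed heap before and after a forced collection (a diagnostic sketch; run it as a release build so the JIT does not extend local lifetimes):

```csharp
using System;
using System.IO;

class MemoryStreamCheck
{
    static void Main()
    {
        using (var ms = new MemoryStream())
        {
            var data = new byte[1024];
            for (int i = 0; i < 10000; i++) ms.Write(data, 0, data.Length);
            Console.WriteLine("in use:   {0:N0} bytes", GC.GetTotalMemory(false));
        }
        // Force a full collection now that the stream is out of scope.
        long after = GC.GetTotalMemory(forceFullCollection: true);
        Console.WriteLine("after GC: {0:N0} bytes", after);
        // 'after' should be roughly back to the baseline, showing no leak.
    }
}
```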
Another side of the problem is what you are using to determine the "memory leak". There are many different ways to measure "free" memory, and depending on which one you use you may get totally different results.
Memory usage shown in Task Manager: unlikely to go down even when all memory is considered "free" by the CLR GC, due to the way the GC uses memory.
GC memory performance counters (and properties): these actually show the GC's view of memory. You want to use them to detect managed memory leaks.
There is one more thing about MemoryStream (and any other large, 85K+ allocation): such allocations use the Large Object Heap, which is only collected on a full GC; to trigger one you may need to run GC.Collect twice. In the normal flow of an application this happens rarely enough that you may not see this memory freed before application shutdown. To diagnose, check the GC collection performance counters (number of GCs).
And one more thing: if you are trying to solve the memory leak because you are getting an "out of memory" exception, it may be caused by address-space fragmentation (normally only in 32-bit processes). If that is the case, consider creating your own memory stream that does not allocate memory in a single chunk and therefore does not have to copy it when growing the stream. Or at least try to preallocate space in the stream.
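Preallocating the capacity avoids the repeated grow-and-copy on the Large Object Heap (a sketch; the 64 MB figure is an arbitrary example):

```csharp
using System;
using System.IO;

class PreallocatedStreamDemo
{
    static void Main()
    {
        // Reserve the full backing buffer once, instead of letting the
        // stream double its LOH buffer repeatedly as it grows.
        const int capacity = 64 * 1024 * 1024;
        using (var ms = new MemoryStream(capacity))
        {
            var chunk = new byte[4096];
            for (int i = 0; i < 1000; i++) ms.Write(chunk, 0, chunk.Length);
            Console.WriteLine(ms.Length); // 4096000
        }
    }
}
```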
I have used this method for batch processing
static byte[] buffer;

public static object Read(XmlDocument xmlDocument)
{
    if (buffer == null)
    {
        buffer = new byte[1024 * 1024 * 512];
    }
    if (xmlDocument != null)
    {
        using (MemoryStream ms = new MemoryStream(buffer))
        {
            xmlDocument.Save(ms);
            ms.Flush();
            ms.Seek(0, SeekOrigin.Begin);
            object result = ReadFromStream(ms);
            ms.Close();
            return result;
        }
    }
    else
    {
        return null;
    }
}
Calling GC.Collect() is not good practice and shouldn't be used as a solution.
You can try it and see if it changes anything, but do not rely on intentional GC.Collect() calls...

Why doesn't calling AppDomain.Unload result in a garbage collection?

When I perform an AppDomain.Unload(myDomain), I expect it to also do a full garbage collection.
According to Jeffrey Richter in "CLR via C#" he says that during an AppDomain.Unload:
The CLR forces a garbage collection to occur, reclaiming the memory used by any objects
that were created by the now unloaded AppDomain. The Finalize methods for
these objects are called, giving the objects a chance to clean themselves up properly.
According to "Steven Pratschner" in "Customizing .NET Framework Common Language Runtime":
After all finalizers have run and no more threads are executing in the domain, the CLR is ready to unload all the in-memory data structures used in the internal implementation. Before this happens, however, the objects that resided in the domain must be collected. After the next garbage collection occurs, the application domain data structures are unloaded from the process address space and the domain is considered unloaded.
Am I misinterpreting their words?
I did the following solution to reproduce the unexpected behavior (in .net 2.0 sp2):
An class library project called "Interfaces" containing this interface:
public interface IXmlClass
{
    void AllocateMemory(int size);
    void Collect();
}
A class library project called "ClassLibrary1" which references "Interfaces" and contains this class:
public class XmlClass : MarshalByRefObject, IXmlClass
{
    private byte[] b;

    public void AllocateMemory(int size)
    {
        this.b = new byte[size];
    }

    public void Collect()
    {
        Console.WriteLine("Call explicit GC.Collect() in " + AppDomain.CurrentDomain.FriendlyName + " Collect() method");
        GC.Collect();
        Console.WriteLine("Number of collections: Gen0:{0} Gen1:{1} Gen2:{2}", GC.CollectionCount(0), GC.CollectionCount(1), GC.CollectionCount(2));
    }

    ~XmlClass()
    {
        Console.WriteLine("Finalizing in AppDomain {0}", AppDomain.CurrentDomain.FriendlyName);
    }
}
A console application project which references "Interfaces" project and does the following logic:
static void Main(string[] args)
{
    AssemblyName an = AssemblyName.GetAssemblyName("ClassLibrary1.dll");
    AppDomain appDomain2 = AppDomain.CreateDomain("MyDomain", null, AppDomain.CurrentDomain.SetupInformation);
    IXmlClass c1 = (IXmlClass)appDomain2.CreateInstanceAndUnwrap(an.FullName, "ClassLibrary1.XmlClass");
    Console.WriteLine("Loaded Domain {0}", appDomain2.FriendlyName);
    int tenmb = 1024 * 10000;
    c1.AllocateMemory(tenmb);
    Console.WriteLine("Number of collections: Gen0:{0} Gen1:{1} Gen2:{2}", GC.CollectionCount(0), GC.CollectionCount(1), GC.CollectionCount(2));
    c1.Collect();
    Console.WriteLine("Unloaded Domain{0}", appDomain2.FriendlyName);
    AppDomain.Unload(appDomain2);
    Console.WriteLine("Number of collections after unloading appdomain: Gen0:{0} Gen1:{1} Gen2:{2}", GC.CollectionCount(0), GC.CollectionCount(1), GC.CollectionCount(2));
    Console.WriteLine("Perform explicit GC.Collect() in Default Domain");
    GC.Collect();
    Console.WriteLine("Number of collections: Gen0:{0} Gen1:{1} Gen2:{2}", GC.CollectionCount(0), GC.CollectionCount(1), GC.CollectionCount(2));
    Console.ReadKey();
}
The output when running the console application is:
Loaded Domain MyDomain
Number of collections: Gen0:0 Gen1:0 Gen2:0
Call explicit GC.Collect() in MyDomain Collect() method
Number of collections: Gen0:1 Gen1:1 Gen2:1
Unloaded Domain MyDomain
Finalizing in AppDomain MyDomain
Number of collections after unloading appdomain: Gen0:1 Gen1:1 Gen2:1
Perform explicit GC.Collect() in Default Domain
Number of collections: Gen0:2 Gen1:2 Gen2:2
Things to notice:
Garbage collection is done per process (just a refresher)
Objects in the appdomain that gets unloaded have their finalizers called, but garbage collection is not done. The 10-megabyte object created by AllocateMemory() will only be collected after performing an explicit GC.Collect() in the above example (or whenever the garbage collector decides to run at some later time).
Other notes: it doesn't really matter if XmlClass is finalizable or not. The same behavior occurs in the above example.
Questions:
Why doesn't calling AppDomain.Unload result in a garbage collection? Is there any way to make that call result in a garbage collection?
Inside AllocateMemory() I plan to load short-lived large XML documents (less than or equal to 16 MB) that will end up on the Large Object Heap and become generation 2 objects. Is there any way to have the memory collected without resorting to explicit GC.Collect() or other kinds of explicit programmatic control of the garbage collector?
Additional Notes:
After some mail exchange with Jeffrey Richter who was kind enough to have a look at the question:
OK, I read your post.
First, the array will not be GC'd until the XmlClass object is GC'd, and it takes TWO GCs to collect this object because it contains a Finalize method.
Second, unloading an appdomain at least performs the marking phase of the GC, since this is the only way to determine which objects are unreachable so that their Finalize methods can be called.
However, the compacting part of the GC might or might not be done when unloading an appdomain.
Calling GC.CollectionCount obviously does not tell the whole story. It is not showing that the GC marking phase did occur.
And it's possible that AppDomain.Unload starts a GC via some internal code which does not cause the collection count variables to be incremented. We already know for a fact that the marking phase is being performed and that the collection count is not reflecting this.
A better test would be to look at some object addresses in the debugger and see if compaction actually occurs. If it does (and I suspect it does), then the collection count is just not being updated correctly.
If you want to post this to the web site as my response, you can.
After taking his advice and looking into SOS (I also removed the finalizer), it revealed this:
Before AppDomain.Unload:
!EEHeap -gc
Number of GC Heaps: 1
generation 0 starts at 0x0180b1f0
generation 1 starts at 0x017d100c
generation 2 starts at 0x017d1000
ephemeral segment allocation context: none
segment begin allocated size
017d0000 017d1000 01811ff4 0x00040ff4(266228)
Large object heap starts at 0x027d1000
segment begin allocated size
027d0000 027d1000 02f75470 0x007a4470(8012912)
Total Size 0x7e5464(8279140)
------------------------------
GC Heap Size 0x7e5464(8279140)
After AppDomain.Unload (same addresses, no heap compaction was done)
!EEHeap -gc
Number of GC Heaps: 1
generation 0 starts at 0x0180b1f0
generation 1 starts at 0x017d100c
generation 2 starts at 0x017d1000
ephemeral segment allocation context: none
segment begin allocated size
017d0000 017d1000 01811ff4 0x00040ff4(266228)
Large object heap starts at 0x027d1000
segment begin allocated size
027d0000 027d1000 02f75470 0x007a4470(8012912)
Total Size 0x7e5464(8279140)
------------------------------
GC Heap Size 0x7e5464(8279140)
After GC.Collect(), addresses differ indicating heap compaction was done.
!EEHeap -gc
Number of GC Heaps: 1
generation 0 starts at 0x01811234
generation 1 starts at 0x0180b1f0
generation 2 starts at 0x017d1000
ephemeral segment allocation context: none
segment begin allocated size
017d0000 017d1000 01811ff4 0x00040ff4(266228)
Large object heap starts at 0x027d1000
segment begin allocated size
027d0000 027d1000 027d3240 0x00002240(8768)
Total Size 0x43234(274996)
------------------------------
GC Heap Size 0x43234(274996)
After more SOS investigation, the conclusion I've reached is that this is surely by design, and that heap compaction is not necessarily done. The only thing you can really be sure of during an AppDomain unload is that objects will be marked as unreachable and will be collected during the next garbage collection (which, like I said, does not happen exactly when you unload your application domain, unless by coincidence).
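If an immediate reclaim is required, the unload can be followed by an explicit collection (a sketch of the workaround this thread converges on; it is not something the CLR does for you):

```csharp
using System;

class UnloadAndCollect
{
    static void Main()
    {
        AppDomain domain = AppDomain.CreateDomain("MyDomain");
        // ... create objects inside 'domain', then:
        AppDomain.Unload(domain); // marks and finalizes, but may not compact

        // Explicitly collect and compact the heap afterwards.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        Console.WriteLine("unloaded and collected");
    }
}
```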
EDIT: I've also asked Maoni Stephens, who works directly in the GC team. You can read her response somewhere in the comments here. She confirms that it is by design.
Case closed :)
Probably by design, but I don't understand why you want this behaviour (an implicit GC.Collect). As long as the finalizers are called, the objects are removed from the finalizer queue and are ready to be garbage collected when required (the GC thread will kick in when necessary).
You can probably use some nasty unmanaged allocation and some heavy interop, or code it in unmanaged C++ and then use a managed wrapper to access it through C#, but as long as you stay within the managed .NET world, no.
It is wiser to take a second look at your architecture instead of trying to play the role of the garbage collector.
