Summary: C#/.NET is supposed to be garbage collected. C# has a destructor, used to clean up resources. What happens when an object A is garbage collected on the same line where I try to clone one of its member variables? Apparently, on multiprocessors, sometimes, the garbage collector wins...
The problem
Today, on a training session on C#, the teacher showed us some code which contained a bug only when run on multiprocessors.
I'll summarize by saying that sometimes the compiler or the JIT screws up, and the finalizer of a C# class object is called before the method called on that object has returned.
The full code, taken from the Visual C++ 2005 documentation, will be posted as an "answer" to avoid making a very, very large question, but the essentials are below:
The following class has a "Hash" property which will return a cloned copy of an internal array. At its construction, the first item of the array has a value of 2. In the destructor, its value is set to zero.
The point is: If you try to get the "Hash" property of "Example", you'll get a clean copy of the array, whose first item is still 2, as the object is being used (and as such, not being garbage collected/finalized):
public class Example
{
    private int nValue;
    public int N { get { return nValue; } }

    // The Hash property is slower because it clones an array. When
    // KeepAlive is not used, the finalizer sometimes runs before
    // the Hash property value is read.
    private byte[] hashValue;
    public byte[] Hash { get { return (byte[])hashValue.Clone(); } }

    public Example()
    {
        nValue = 2;
        hashValue = new byte[20];
        hashValue[0] = 2;
    }

    ~Example()
    {
        nValue = 0;
        if (hashValue != null)
        {
            Array.Clear(hashValue, 0, hashValue.Length);
        }
    }
}
But nothing is so simple...
The code using this class is working inside a thread, and of course, for the test, the app is heavily multithreaded:
public static void Main(string[] args)
{
    Thread t = new Thread(new ThreadStart(ThreadProc));
    t.Start();
    t.Join();
}

private static void ThreadProc()
{
    // running is a boolean which stays true until
    // the user presses ENTER
    while (running) DoWork();
}
The DoWork static method is the code where the problem happens:
private static void DoWork()
{
    Example ex = new Example();
    byte[] res = ex.Hash; // [1]

    // If the finalizer runs before the call to the Hash
    // property completes, the hashValue array might be
    // cleared before the property value is read. The
    // following test detects that.
    if (res[0] != 2)
    {
        // Oops... The finalizer of ex was launched before
        // the Hash method/property completed
    }
}
Once every 1,000,000 executions of DoWork, apparently, the garbage collector does its magic and tries to reclaim "ex", as it is no longer referenced in the remaining code of the function, and this time it is faster than the "Hash" getter. So what we get in the end is a clone of a zeroed byte array, instead of the right one (with the first item set to 2).
My guess is that there is inlining of the code, which essentially replaces the line marked [1] in the DoWork function by something like:
// Supposed inlined processing
byte[] res2 = ex.Hash2;
// note that after this line, "ex" could be garbage collected,
// but not res2
byte[] res = (byte[])res2.Clone();
If we suppose Hash2 is a simple accessor coded like this:
// Hash2 code:
public byte[] Hash2 { get { return (byte[])hashValue; } }
So, the question is: Is this supposed to work that way in C#/.NET, or could this be considered a bug in either the compiler or the JIT?
Edit
See Chris Brumme's and Chris Lyons' blogs for an explanation.
http://blogs.msdn.com/cbrumme/archive/2003/04/19/51365.aspx
http://blogs.msdn.com/clyon/archive/2004/09/21/232445.aspx
Everyone's answer was interesting, but I couldn't choose one better than the other. So I gave you all a +1...
Sorry
:-)
Edit 2
I was unable to reproduce the problem on Linux/Ubuntu/Mono, despite using the same code under the same conditions (multiple instances of the same executable running simultaneously, release mode, etc.).
It's simply a bug in your code: finalizers should not be accessing managed objects.
The only reason to implement a finalizer is to release unmanaged resources. And in this case, you should carefully implement the standard IDisposable pattern.
With this pattern, you implement a protected method, "protected virtual void Dispose(bool disposing)". When this method is called from the finalizer, it cleans up unmanaged resources, but does not attempt to clean up managed resources.
In your example, you don't have any unmanaged resources, so should not be implementing a finalizer.
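For reference, a minimal sketch of that pattern (the class name, the Stream field, the IntPtr handle and the commented-out native call are placeholders, not taken from the question's code):

using System;
using System.IO;

public class ResourceHolder : IDisposable
{
    private Stream _managedResource;     // a managed resource (placeholder)
    private IntPtr _unmanagedHandle;     // an unmanaged resource (placeholder)
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);       // the finalizer is no longer needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;

        if (disposing)
        {
            // Called from Dispose(): it is safe to touch managed objects here.
            if (_managedResource != null)
                _managedResource.Dispose();
        }

        // Reached from both Dispose() and the finalizer: release unmanaged
        // resources only, never other managed objects.
        if (_unmanagedHandle != IntPtr.Zero)
        {
            // FreeNativeHandle(_unmanagedHandle);   // hypothetical native call
            _unmanagedHandle = IntPtr.Zero;
        }

        _disposed = true;
    }

    ~ResourceHolder()
    {
        Dispose(false);
    }
}

With SuppressFinalize in Dispose, a correctly disposed object never reaches the finalizer queue at all.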
What you're seeing is perfectly natural.
You don't keep a reference to the object that owns the byte array, so that object (not the byte array) is actually free for the garbage collector to collect.
The garbage collector really can be that aggressive.
So if you call a method on your object which returns a reference to an internal data structure, and the finalizer for your object messes up that data structure, you need to keep a live reference to the object as well.
The garbage collector sees that the ex variable isn't used in that method any more, so it can, and as you noticed, will garbage collect it under the right circumstances (i.e. timing and need).
The correct way to do this is to call GC.KeepAlive on ex, so add this line of code to the bottom of your method, and all should be well:
GC.KeepAlive(ex);
I learned about this aggressive behavior by reading the book Applied .NET Framework Programming by Jeffrey Richter.
This looks like a race condition between your worker thread and the GC thread(s); to avoid it, I think there are two options (both sketched below):
(1) change your if statement to use ex.Hash[0] instead of res, so that ex cannot be GC'd prematurely, or
(2) lock ex for the duration of the call to Hash
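A rough sketch of both suggestions, applied to the DoWork method from the question (the method names are made up, and this is untested):

// Option (1): reference ex again in the test, so the JIT keeps reporting it.
private static void DoWorkOption1()
{
    Example ex = new Example();
    byte[] res = ex.Hash;
    if (ex.Hash[0] != 2)   // this later use of ex keeps it reachable until here
    {
        // the finalizer cannot have run before this point
    }
}

// Option (2): hold a lock on ex while Hash runs; the expanded lock statement
// keeps a reference to ex until Monitor.Exit executes in its hidden finally.
private static void DoWorkOption2()
{
    Example ex = new Example();
    byte[] res;
    lock (ex)
    {
        res = ex.Hash;
    }
    if (res[0] != 2)
    {
        // should no longer happen, since ex stayed reachable during Hash
    }
}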
that's a pretty spiffy example - was the teacher's point that there may be a bug in the JIT compiler that only manifests on multicore systems, or that this kind of coding can have subtle race conditions with garbage collection?
I think what you are seeing is reasonable behavior due to the fact that things are running on multiple threads. This is the reason for the GC.KeepAlive() method, which should be used in this case to tell the GC that the object is still being used and that it isn't a candidate for cleanup.
Looking at the DoWork function in your "full code" response, the problem is that immediately after this line of code:
byte[] res = ex.Hash;
the function no longer makes any references to the ex object, so it becomes eligible for garbage collection at that point. Adding the call to GC.KeepAlive would prevent this from happening.
Yes, this is an issue that has come up before.
It's even more fun in that you need to run a release build for this to happen, and you end up scratching your head going "huh, how can that be null?".
Interesting comment from Chris Brumme's blog
http://blogs.msdn.com/cbrumme/archive/2003/04/19/51365.aspx
class C {
    IntPtr _handle;
    static void OperateOnHandle(IntPtr h) { ... }

    void m() {
        OperateOnHandle(_handle);
        ...
    }
    ...
}

class Other {
    void work() {
        if (something) {
            C aC = new C();
            aC.m();
            ... // most guess here
        } else {
            ...
        }
    }
}
So we can’t say how long ‘aC’ might live in the above code. The JIT might report the reference until Other.work() completes. It might inline Other.work() into some other method, and report aC even longer. Even if you add “aC = null;” after your usage of it, the JIT is free to consider this assignment to be dead code and eliminate it. Regardless of when the JIT stops reporting the reference, the GC might not get around to collecting it for some time.
It’s more interesting to worry about the earliest point that aC could be collected. If you are like most people, you’ll guess that the soonest aC becomes eligible for collection is at the closing brace of Other.work()’s “if” clause, where I’ve added the comment. In fact, braces don’t exist in the IL. They are a syntactic contract between you and your language compiler. Other.work() is free to stop reporting aC as soon as it has initiated the call to aC.m().
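To make that concrete, here's a hedged, self-contained sketch in the spirit of the quoted example (the class and member names are invented, and the native handle is simulated with AllocHGlobal):

using System;
using System.Runtime.InteropServices;

class HandleOwner
{
    private IntPtr _handle = Marshal.AllocHGlobal(16);   // stand-in for a real native handle

    ~HandleOwner()
    {
        // The finalizer frees the native memory, like ~C would free _handle.
        Marshal.FreeHGlobal(_handle);
    }

    public void UseHandle()
    {
        OperateOnHandle(_handle);
    }

    private static void OperateOnHandle(IntPtr h)
    {
        // imagine a long-running native call that keeps using h
    }
}

class Other
{
    public void Work(bool something)
    {
        if (something)
        {
            HandleOwner owner = new HandleOwner();
            owner.UseHandle();
            // Without this, the JIT may stop reporting 'owner' as soon as the
            // call to UseHandle has been initiated, so the finalizer could free
            // the handle while OperateOnHandle is still working with it.
            GC.KeepAlive(owner);
        }
    }
}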
It's perfectly normal for the finalizer to be called in your DoWork method, as after the
ex.Hash call the CLR knows that the ex instance won't be needed anymore...
Now, if you want to keep the instance alive do this:
private static void DoWork()
{
    Example ex = new Example();
    byte[] res = ex.Hash; // [1]

    // If the finalizer runs before the call to the Hash
    // property completes, the hashValue array might be
    // cleared before the property value is read. The
    // following test detects that.
    if (res[0] != 2) // NOTE
    {
        // Oops... The finalizer of ex was launched before
        // the Hash method/property completed
    }

    GC.KeepAlive(ex); // keep our instance alive in case we need it.. uh.. we don't
}
GC.KeepAlive does... nothing :) it's an empty, non-inlinable method whose only purpose is to trick the GC/JIT into thinking the object will still be used after this point.
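To illustrate the idea (this is a sketch of the concept, not the actual BCL source), a method like this is enough: the argument has to be passed, so the JIT treats the reference as live up to the call site, and the call itself can't be inlined away:

using System.Runtime.CompilerServices;

static class MyGC
{
    // Sketch only: an empty method the JIT may not inline. The reference
    // passed in is therefore considered live until this call returns.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static void KeepAlive(object obj)
    {
        // intentionally empty
    }
}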
WARNING: Your example is just as valid if the DoWork method were a managed C++ method... You DO have to manually keep the managed instances alive if you don't want the destructor to be called from within another thread. I.e., you pass a reference to a managed object that is going to delete a blob of unmanaged memory when finalized, and the method is using this same blob. If you don't keep the instance alive, you're going to have a race condition between the GC and your method's thread.
And this will end in tears. And managed heap corruption...
The Full Code
You'll find below the full code, copy/pasted from a Visual C++ 2008 .cs file. As I'm now on Linux, without a Mono compiler or knowledge of how to use it, there's no way I can run tests now. Still, a couple of hours ago, I saw this code run and exhibit the bug:
using System;
using System.Threading;

public class Example
{
    private int nValue;
    public int N { get { return nValue; } }

    // The Hash property is slower because it clones an array. When
    // KeepAlive is not used, the finalizer sometimes runs before
    // the Hash property value is read.
    private byte[] hashValue;
    public byte[] Hash { get { return (byte[])hashValue.Clone(); } }
    public byte[] Hash2 { get { return (byte[])hashValue; } }

    public int returnNothing() { return 25; }

    public Example()
    {
        nValue = 2;
        hashValue = new byte[20];
        hashValue[0] = 2;
    }

    ~Example()
    {
        nValue = 0;
        if (hashValue != null)
        {
            Array.Clear(hashValue, 0, hashValue.Length);
        }
    }
}

public class Test
{
    private static int totalCount = 0;
    private static int finalizerFirstCount = 0;

    // This variable controls the thread that runs the demo.
    private static bool running = true;

    // In order to demonstrate the finalizer running first, the
    // DoWork method must create an Example object and invoke its
    // Hash property. If there are no other calls to members of
    // the Example object in DoWork, garbage collection reclaims
    // the Example object aggressively. Sometimes this means that
    // the finalizer runs before the call to the Hash property
    // completes.
    private static void DoWork()
    {
        totalCount++;

        // Create an Example object and save the value of the
        // Hash property. There are no more calls to members of
        // the object in the DoWork method, so it is available
        // for aggressive garbage collection.
        Example ex = new Example();

        // Normal processing
        byte[] res = ex.Hash;

        // Supposed inlined processing
        //byte[] res2 = ex.Hash2;
        //byte[] res = (byte[])res2.Clone();

        // successful try to keep reference alive
        //ex.returnNothing();

        // Failed try to keep reference alive
        //ex = null;

        // If the finalizer runs before the call to the Hash
        // property completes, the hashValue array might be
        // cleared before the property value is read. The
        // following test detects that.
        if (res[0] != 2)
        {
            finalizerFirstCount++;
            Console.WriteLine("The finalizer ran first at {0} iterations.", totalCount);
        }

        //GC.KeepAlive(ex);
    }

    public static void Main(string[] args)
    {
        Console.WriteLine("Test:");

        // Create a thread to run the test.
        Thread t = new Thread(new ThreadStart(ThreadProc));
        t.Start();

        // The thread runs until Enter is pressed.
        Console.WriteLine("Press Enter to stop the program.");
        Console.ReadLine();

        running = false;

        // Wait for the thread to end.
        t.Join();

        Console.WriteLine("{0} iterations total; the finalizer ran first {1} times.", totalCount, finalizerFirstCount);
    }

    private static void ThreadProc()
    {
        while (running) DoWork();
    }
}
For those interested, I can send the zipped project through email.
Related
I want to start new threads, each for one repeating operation. But when such an operation is already in progress, I want to discard the current task. In my scenario I need very current data only - dropped data is not an issue.
In MSDN I found the Mutex class, but as I understand it, it waits for its turn, blocking the current thread. I also want to ask: does something already exist in the .NET Framework that does the following:
Is some method M already being executed?
If so, return (and let me increase some counter for statistics)
If not, start method M in a new thread
The lock(someObject) statement, which you may have come across, is syntactic sugar around Monitor.Enter and Monitor.Exit.
However, if you use the monitor in this more verbose way, you can also use Monitor.TryEnter which allows you to check if you'll be able to get the lock - hence checking if someone else already has it and is executing code.
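As an aside, here's roughly what that syntactic sugar expands to (a sketch; the exact shape depends on the compiler version, and System.Threading is assumed to be imported):

object lockObject = new object();
bool lockTaken = false;

// Roughly what `lock (lockObject) { /* do some stuff */ }` compiles down to:
try
{
    Monitor.Enter(lockObject, ref lockTaken);
    // do some stuff
}
finally
{
    if (lockTaken)
    {
        Monitor.Exit(lockObject);
    }
}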
So instead of this:
var lockObject = new object();
lock (lockObject)
{
    // do some stuff
}
try this (option 1):
int _alreadyBeingExecutedCounter;
var lockObject = new object();

if (Monitor.TryEnter(lockObject))
{
    // you'll only end up here if you got the lock when you tried to get it - otherwise you'll never execute this code.
    // do some stuff

    // call Exit to release the lock
    Monitor.Exit(lockObject);
}
else
{
    // didn't get the lock - someone else was executing the code above - so I don't need to do any work!
    Interlocked.Increment(ref _alreadyBeingExecutedCounter);
}
(you'll probably want to put a try..finally in there to ensure the lock is released)
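For example, option 1 with the try..finally added (a sketch; _lockObject and MyMethod are illustrative names, not from your code):

private static readonly object _lockObject = new object();
private static int _alreadyBeingExecutedCounter;

public void MyMethod()
{
    if (Monitor.TryEnter(_lockObject))
    {
        try
        {
            // do some stuff
        }
        finally
        {
            Monitor.Exit(_lockObject);   // always released, even if "do some stuff" throws
        }
    }
    else
    {
        // didn't get the lock - someone else is already running the code above
        Interlocked.Increment(ref _alreadyBeingExecutedCounter);
    }
}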
or dispense with the explicit lock altogether and do this
(option 2)
private int _inUseCount;

public void MyMethod()
{
    if (Interlocked.Increment(ref _inUseCount) == 1)
    {
        // do some stuff
    }
    Interlocked.Decrement(ref _inUseCount);
}
[Edit: in response to your question about this]
No - don't use this to lock on. Create a privately scoped object to act as your lock.
Otherwise you have this potential problem:
public class MyClassWithLockInside
{
    public void MethodThatTakesLock()
    {
        lock (this)
        {
            // do some work
        }
    }
}

public class Consumer
{
    private static MyClassWithLockInside _instance = new MyClassWithLockInside();

    public void ThreadACallsThis()
    {
        lock (_instance)
        {
            // Having taken a lock on our instance of MyClassWithLockInside,
            // do something long running
            Thread.Sleep(6000);
        }
    }

    public void ThreadBCallsThis()
    {
        // If thread B calls this while thread A is still inside the lock above,
        // this method will block as it tries to get a lock on the same object
        // ["this" inside the class = _instance outside]
        _instance.MethodThatTakesLock();
    }
}
In the above example, some external code has managed to disrupt the internal locking of our class just by taking out a lock on something that was externally accessible.
It's much better to create a private object that you control, and that no one outside your class has access to, to avoid these sorts of problems; this includes not using this, or the type itself (typeof(MyClassWithLockInside)), for locking.
One option would be to work with a reentrancy sentinel:
You could define an int field (initialized to 0), update it via Interlocked.Increment on entering the method, and only proceed if it is 1. At the end, just do an Interlocked.Decrement.
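One way to make that concrete (a sketch; the class, method and field names are made up, and Action is just a stand-in for your method M):

using System;
using System.Threading;

static class SingleRunGate
{
    private static int _sentinel;              // 0 = free, anything else = busy
    private static int _skippedForStatistics;  // the statistics counter from the question

    public static void RunIfNotAlreadyRunning(Action work)
    {
        if (Interlocked.Increment(ref _sentinel) == 1)
        {
            try
            {
                work();
            }
            finally
            {
                Interlocked.Decrement(ref _sentinel);
            }
        }
        else
        {
            Interlocked.Decrement(ref _sentinel);            // undo our own increment
            Interlocked.Increment(ref _skippedForStatistics);
        }
    }
}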
Another option:
From your description it seems that you have a producer-consumer scenario...
For this case it might be helpful to use something like BlockingCollection as it is thread-safe and mostly lock-free...
Another option would be to use ConcurrentQueue or ConcurrentStack...
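For instance, a minimal BlockingCollection sketch (the bounded capacity, the int payload and the Console output are placeholders for your real work items):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class BlockingCollectionSketch
{
    // Bounded queue: Add blocks when it is full, GetConsumingEnumerable blocks when empty.
    private static readonly BlockingCollection<int> Queue =
        new BlockingCollection<int>(boundedCapacity: 100);

    static void Main()
    {
        var producer = Task.Run(() =>
        {
            for (int i = 0; i < 1000; i++)
                Queue.Add(i);            // produce (blocks if the queue is full)
            Queue.CompleteAdding();      // tell consumers that no more items will come
        });

        var consumer = Task.Run(() =>
        {
            foreach (int item in Queue.GetConsumingEnumerable())
                Console.WriteLine(item); // stand-in for real processing
        });

        Task.WaitAll(producer, consumer);
    }
}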
You might find some useful information on the following site (the PDF is also downloadable - I recently downloaded it myself). The Advanced Threading, Suspend and Resume, or Aborting chapters may be what you are interested in.
You should use the Interlocked class's atomic operations - for best performance - since you won't actually use system-level synchronization (any "standard" primitive needs it, and that involves system call overhead).
// Simple non-reentrant mutex without ownership; easy to extend to support
// those features (just record the owner after acquiring the lock - compare a
// Thread reference with Thread.CurrentThread, for example - check for a
// matching identity, and add a counter for reentrancy).
// Can't use bool because it's not supported by CompareExchange.
// The field is named _lock because "lock" is a C# keyword.
private int _lock;

public bool TryLock()
{
    // if (Interlocked.Increment(ref _inUseCount) == 1)
    // That kind of code is buggy, since the counter can change between the
    // increment's return and the condition check - the increment is atomic,
    // the "if" isn't. Use CompareExchange instead.

    // Checks if 0, then changes to 1 atomically; returns the original value.
    // Returns true if this thread successfully occupied the lock.
    return Interlocked.CompareExchange(ref _lock, 1, 0) == 0;
}

public bool Release()
{
    // Returns true if the lock was occupied; false if it was already free.
    return Interlocked.CompareExchange(ref _lock, 0, 1) == 1;
}
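Hypothetical usage of the TryLock/Release pair above (myLock and skippedCount are made-up names, not from the question):

if (myLock.TryLock())
{
    try
    {
        // do the work that must not run concurrently
    }
    finally
    {
        myLock.Release();
    }
}
else
{
    Interlocked.Increment(ref skippedCount);   // someone else is already doing the work
}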
I'm using a native library via PInvoke calls which return Byte*, and I want to make sure that in a producer/consumer scenario the consumer thread gets the latest data.
I have a very contrived example here, to try to illustrate what I'm asking. I'm aware of the 'hot loop' in the Consumer (it's just an example, the real code is much larger and would not be feasible to paste here).
public unsafe class ThreadExample {

    class PInvokeResult {
        public Byte* Data;
        public Int32 Length;
    }

    // Shared Objects Used For Synchronization
    Object SyncRoot = new Object();
    Queue<PInvokeResult> WorkQueue = new Queue<PInvokeResult>();

    // Producer Thread
    void Producer() {
        while (true) {
            PInvokeResult workItem;
            workItem = new PInvokeResult();
            workItem.Data = PInvokeNativeLongRunningCall(out workItem.Length);

            lock (SyncRoot) {
                WorkQueue.Enqueue(workItem);
            }
        }
    }

    // Consumer Thread
    void Consumer() {
        while (true) {
            bool workAvailable = false;
            PInvokeResult workItem = null;

            lock (SyncRoot) {
                if (WorkQueue.Count > 0) {
                    workItem = WorkQueue.Dequeue();
                    workAvailable = true;
                }
            }

            if (workAvailable) {
                ProcessWorkItem(workItem);
                PInvokeReturnPointerBuffer(workItem.Data);
            }
        }
    }
}
Is the lock here enough to make sure that the Byte* pointer on PInvokeResult never points to 'stale' data when the consumer reads it?
What I mean by stale data here is this case:
1. One specific Byte* buffer gets returned from the PInvokeNativeLongRunningCall invocation.
2. The buffer is passed from the producer to the consumer; this uses locking on the SyncRoot to make sure access to the Queue is safe.
3. The consumer does some work on the item and then returns it to the native code via PInvokeReturnPointerBuffer.
4. The buffer is then recycled and re-used on the native side, and the data in the buffer is first set to all zeroes and then written to again.
5. The cycle then starts over from 1.
When the buffer comes to the Consumer for the second time, how can I be sure that the data it sees is the latest data written by the PInvoke call?
This question is purely from the C# perspective; I know that the native code is fine and solid, it's a well-used library.
Is this even something that a language has to account for, or is this completely handled by the CPU itself?
I've read this article and this too.
I was trying to implement a destructor in simple code.
class Program
{
    static void Main(string[] args)
    {
        CreateSubscriber();
        Console.Read();
    }

    static void CreateSubscriber()
    {
        Subscriber s = new Subscriber();
        s.list.Clear();
    }
}

public class Subscriber
{
    public List<string> list = new List<string>();

    public Subscriber()
    {
        for (long i = 0; i < 1000000; i++)
        {
            list.Add(i.ToString());
        }
    }

    ~Subscriber()
    {
        //this line is only performed on explicit GC.Collect()
        Console.WriteLine("Destructor Called - Sub");
    }
}
When the code reached the Console.Read() line, the instance of Subscriber was no longer in scope (I was expecting it to be eligible for garbage collection). I left the above code running for almost 2 hours waiting for the destructor of Subscriber, but it was never called, nor was the memory taken by the code released.
I understand that in C# we cannot call destructors programmatically and that they are called automatically on garbage collection, so I tried to call GC.Collect() explicitly.
By doing that, I could see that the destructor was called. So in my code above, garbage collection was not being done! But why?
Is it because the program is single-threaded and that thread is waiting for user input on Console.Read()?
Or does it have something to do with the List of string? If so, what is it?
Update (for future readers)
As Fabjan suggested in his answer:
Most likely somewhere when a new object is created and memory is allocated for it, GC does perform a check of all references and collects the first object.
and suggested trying:
CreateSubscriber();
Console.Read();
CreateSubscriber();
Console.ReadKey();
I updated the code like below,
class Program
{
    static void Main(string[] args)
    {
        CreateSubscriber(true);
        Console.ReadLine();
        CreateSubscriber(false);
        Console.ReadLine();
    }

    static void CreateSubscriber(bool toDo)
    {
        Subscriber s = new Subscriber(toDo);
        s.list.Clear();
    }
}

public class Subscriber
{
    public List<string> list = new List<string>();

    public Subscriber(bool toDo)
    {
        Console.WriteLine("In Constructor");
        if (toDo)
        {
            for (long i = 0; i < 5000000; i++)
                list.Add(i.ToString());
        }
        else
        {
            for (long i = 0; i < 2000000; i++)
                list.Add(i.ToString());
        }
        Console.WriteLine("Out Constructor");
    }

    ~Subscriber()
    {
        Console.WriteLine("Destructor Called - Sub");
    }
}
Output: as he expected, on the second instance creation of Subscriber, I could see the garbage being collected (the finalizer being called).
Note that in the else branch of Subscriber's constructor, I'm adding fewer items to the list than in the if branch, to check whether the application's RAM usage decreases accordingly. Yes, it decreases too.
In the else branch I could have left the list of strings empty (so memory usage would decrease significantly), but when doing so, the GC does not collect. Most likely because of the reason M.Aroosi mentioned in the question's comments:
In addition to what's said above, the GC will only collect once a generation gets full (or due to an explicit call), and just one object created wouldn't trigger it. Yes, the object is eligible for finalization, but there's no reason for the GC to collect it.
When the code reached the Console.Read() line, the instance of Subscriber was no longer in scope (I was expecting it to be eligible for garbage collection).
When the GC detects that the reference to the instance of Subscriber is lost (out of scope), it will mark this object to be collected on one of the next rounds.
But only the GC knows exactly when this next round will be.
Is it because the program is single-threaded and that thread is waiting for user input on Console.Read()?
No, if we run this code in a separate thread the result will be the same.
However if we change this:
CreateSubscriber();
Console.Read();
To:
CreateSubscriber();
Console.Read();
CreateSubscriber();
Console.ReadKey();
We could see that GC will collect the garbage and run the finalizer after Console.Read(). Why?
Most likely somewhere when a new object is created and memory is allocated for it, GC does perform a check of all references and collects the first object.
Let's summarize it a bit:
When we only create an object and there is no reference in the code that points to this object or its class until the program ends, the GC allows the program to end and collects the garbage before exiting.
When we create an object and there is some sort of reference to the object or its class, the GC does perform a check and collects the garbage.
There's some complex logic behind how and when the GC runs a collection, and how and when the lifetime of an object ends.
A quote from Eric Lippert's answer:
The lifetime may be extended by a variety of mechanisms, including
capturing outer variables by a lambda, iterator blocks, asynchronous
methods, and so on
It's very rare that we need to execute some code on an object's destruction. In that specific scenario, instead of guessing when the object will be destroyed, we could run GC.Collect explicitly.
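Applied to the code from the question, that would look something like this (for experimentation only; relying on forced collections in production code is rarely a good idea):

CreateSubscriber();
GC.Collect();                    // request a collection now
GC.WaitForPendingFinalizers();   // block until queued finalizers (such as ~Subscriber) have run
Console.Read();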
More often, though, we might need to free some resources, and for that we can use the IDisposable interface and the using statement, which automatically calls Dispose before control flow leaves the block of code (the compiler creates a try {} finally {} construct and calls Dispose for us in the finally).
using (myDisposable)
{
    ...
} // Dispose is called here
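Roughly, the compiler turns the using block above into something like this (a sketch; the temporary's exact name and type are compiler details):

IDisposable resource = myDisposable;
try
{
    // ...
}
finally
{
    if (resource != null)
    {
        resource.Dispose();   // runs even if the block above throws
    }
}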
I am implementing an infinite task that runs every so often via a .NET console application. However, I am concerned that the code below will result in a memory leak. Since Main is static (this is where my knowledge of garbage collection gets foggy), does it not mean that the object I create within the try/catch won't be picked up by the garbage collector until Main finishes (which is never)?
Could someone please explain how the garbage collector would behave in this case?
public static void Main(string[] args)
{
    Log.Logger = new LoggerConfiguration()
        .MinimumLevel.Debug()
        .WriteTo.RollingFile("Logs/log-{Date}.txt")
        .CreateLogger();

    while (true)
    {
        try
        {
            Thread.Sleep(1000);
            new UpdatePlayer().Run();
        }
        catch (Exception ex)
        {
            Log.Error(ex.ToString());
        }
    }
}
You'll have no memory leak: Main doesn't hold any reference to the UpdatePlayer() instance:
...
try
{
    Thread.Sleep(1000);
    // Instance created, executed
    new UpdatePlayer().Run();
}
// And can be safely collected here when the instance is out of scope (`try {...}`);
// the anonymous instance can't be accessed and that's enough
A sample with memory leak:
// please notice that anchor is outside the `while (true)` scope
List<Object> anchor = new List<Object>();
...

while (true)
{
    try
    {
        Thread.Sleep(1000);
        // Instance created
        var item = new UpdatePlayer();
        // anchored
        anchor.Add(item);
        // and executed
        item.Run();
    }
    // item can't be collected here: anchor has a reference to it,
    // and the item potentially can be accessed by, say, anchor[0]
Edit: and if you move the collection into the while (true) scope, the code will have no memory leak:
try
{
    List<object> pseudoAnchor = new List<object>();
    // Instance created
    var item = new UpdatePlayer();
    // tried to be anchored (illusion)
    pseudoAnchor.Add(item);
    // and executed
    item.Run();
}
...
// the only reference to item is pseudoAnchor, but
// pseudoAnchor has not been referenced afterwards, so the GC
// can safely collect the entire pseudoAnchor with all its items
// (just one `item` in fact)
Since Main is static (this is where my knowledge on garbage collection gets foggy) does it not mean that the object that I create within the try and catch won't be picked up by the garbage collector until Main finishes (which is never)?
No, that is false.
The Garbage Collector is able to free the memory of any object it can prove can no longer be accessed from managed code. In this case, it's clear that after the Run method ends (unless you store a reference to that object somewhere else) that object will no longer be accessible, and so the GC is allowed to free it (it may take some time to do so, but it's allowed to).
Suppose an object has a Finalize() method.
When it is first created, a pointer to it is added to the finalization queue.
The object then has no references.
When a garbage collection occurs, it moves the reference from the finalization queue to the f-reachable queue and a thread is started to run the Finalize method (sequentially after other objects' Finalize methods).
So the object now (after resurrection) has only one root, which is the pointer from the f-reachable queue.
At this point, does/did the object get promoted to the next generation ?
This is something you can just try. Run this code in the Release build without a debugger attached:
using System;

class Program {
    static void Main(string[] args) {
        var obj = new Foo();
        // There are 3 generations, could have used GC.MaxGeneration
        for (int ix = 0; ix < 3; ++ix) {
            GC.Collect();
            GC.WaitForPendingFinalizers();
        }
        Console.ReadLine();
    }
}

class Foo {
    int attempt = 0;
    ~Foo() {
        // Don't try to do this over and over again, rough at program exit
        if (attempt++ < 3) {
            GC.ReRegisterForFinalize(this);
            Console.WriteLine(GC.GetGeneration(this));
        }
    }
}
Output:
1
2
2
So it stays in the generation it got moved to by the collection, moving to the next one on each collection until it hits the last one. Which makes sense in general.
It seems like the answer is yes, this will happen. http://msdn.microsoft.com/en-us/magazine/bb985010.aspx says:
... Two GCs are required to reclaim memory used by objects that require finalization. In reality, more than two collections may be necessary since the objects could get promoted to an older generation.