C# interface to COM leaks memory... strangely - c#

I have a C# program that accesses a COM Interface to a piece of simulation software called Aspen Plus. I have a very strange memory leak.
When I need to get the result values out of the simulation, I run a series of calls like this. In some cases the returned variable might be null, so I insert a check for that, and then I use FinalReleaseComObject to clean up the COM references.
public override ValueType recvValueFromSim<ValueType>(string path) {
    Happ.IHNode tree = this.Aspen.Tree;
    dynamic node = tree.FindNode(path);
    ValueType retVal = default(ValueType);
    if (node != null && node.Value != null) {
        retVal = node.Value;
    }
    Marshal.FinalReleaseComObject(node);
    Marshal.FinalReleaseComObject(tree);
    node = null;
    return retVal;
}
Unfortunately, the above code leaks a lot. It leaks 2MB per simulation. At first I thought the Garbage Collector would eventually run and clean it up, but no dice. After running a couple of hundred simulations, I ran out of memory.
The bizarre thing is, the code below works fine and doesn't leak. I don't like it, because using catch to check for null references seems like bad form, but it doesn't leak.
public override ValueType recvValueFromSim<ValueType>(string path) {
    ValueType node;
    try {
        node = this.Aspen.Tree.FindNode(path).Value;
        return node;
    } catch {
        return default(ValueType);
    }
}
Why doesn't it leak? Does anybody know? This belies what I thought I knew about temporary references and releasing COM objects.
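For comparison, here is a hedged sketch of a variant that reads node.Value once and does the releases in a finally block; it assumes (without confirmation) that the repeated dynamic Value accesses are what create the extra RCWs:

// Sketch only: assumes the leak comes from repeated dynamic member accesses on node.
public override ValueType recvValueFromSim<ValueType>(string path) {
    Happ.IHNode tree = null;
    dynamic node = null;
    try {
        tree = this.Aspen.Tree;
        node = tree.FindNode(path);
        if (node != null) {
            // Read Value exactly once so only one hidden RCW is created for it.
            object value = node.Value;
            if (value != null) {
                return (ValueType)value;
            }
        }
        return default(ValueType);
    } finally {
        // Release in reverse order of acquisition; guard against nulls.
        if (node != null) Marshal.FinalReleaseComObject(node);
        if (tree != null) Marshal.FinalReleaseComObject(tree);
    }
}

The difference from the original is that node.Value is evaluated a single time and the releases run even when the node is missing.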

Related

Disposing of all members in a loop

I have a very large project with multiple pages, where each page has many IDisposable members.
I'm trying to figure out a way to dispose all the IDisposable members in a loop so I won't have to type x1.Dispose(); x2.Dispose(); ... xn.Dispose(); in each class.
Is there a way to do this?
Thank you.
Sure, just make sure that you create a list to hold them and a try/finally block to protect yourself from leaking them.
// List for holding your disposable types
var connectionList = new List<IDisposable>();
try
{
    // Instantiate your page states; this may need to be done at a high level.
    // These additions are oversimplified, as there will be nested calls
    // building this list; in other words, these will more than likely take place in methods.
    connectionList.Add(x1);
    connectionList.Add(x2);
    connectionList.Add(x3);
}
finally
{
    foreach (IDisposable disposable in connectionList)
    {
        try
        {
            disposable.Dispose();
        }
        catch (Exception)
        {
            // Log any error? This must be caught in order to prevent
            // leaking the disposable resources in the rest of the list.
        }
    }
}
However, this approach is not always ideal. The nested calls can get complicated and force the cleanup so far up in the architecture of your program that you may want to consider just handling these resources locally.
Moreover, this approach fails badly when the disposable resources are expensive and need to be released immediately. While you can track your disposable elements and dispose them all at once, it is best to keep each object's lifetime as short as possible for resources like this.
Whatever you do, make sure not to leak the disposable resources. If these are connection threads and they are inactive for some period of time, it may also be wise to simply look at their state and reuse them in different places instead of letting them hang around.
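If handling them locally is an option, the usual way to keep a lifetime short is a using block. A minimal sketch, assuming the resource is something simple like a file stream (the type and method names here are illustrative, not from the question):

using System.IO;

public static class LocalDisposalExample
{
    // Illustrative only: dispose the resource right where it is created,
    // instead of collecting everything into a list for later cleanup.
    public static long CountBytes(string path)
    {
        using (FileStream stream = File.OpenRead(path))
        {
            return stream.Length;
        } // Dispose() runs here, even if an exception is thrown.
    }
}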
Using reflection (not tested):
// Requires: using System.Linq;
public static void DisposeAllMembersWithReflection(object target)
{
    if (target == null) return;
    // Get all instance fields; you can change this to GetProperties() or GetMembers().
    // BindingFlags.Instance is needed here, otherwise no fields are returned.
    var fields = target.GetType().GetFields(System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.NonPublic);
    // Keep only the fields whose type implements IDisposable.
    var disposables = fields.Where(x => x.FieldType.GetInterfaces().Contains(typeof(IDisposable)));
    foreach (var disposableField in disposables)
    {
        var value = (IDisposable)disposableField.GetValue(target);
        if (value != null)
            value.Dispose();
    }
}
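A hedged usage sketch, assuming the helper above (with the BindingFlags.Instance fix) is accessible from the page class, e.g. defined in the same class; the field names and types are made up for illustration:

// Requires: using System.IO;
// Hypothetical page type; x1 and x2 are illustrative disposable fields.
class SettingsPage
{
    private FileStream x1 = File.OpenRead("data.bin");
    private StreamWriter x2 = new StreamWriter("page.log");

    public void CleanUp()
    {
        // Finds every IDisposable instance field via reflection and disposes it.
        DisposeAllMembersWithReflection(this);
    }
}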
Create a method which will dispose all your disposable objects:
public void DisposeAll()
{
    x1.Dispose();
    x2.Dispose();
    x3.Dispose();
    . . .
}
and call it wherever you need it.

Do I absolutely need to call ReleaseComObject on every MSHTML object?

I'm using MSHTML with a WebBrowser control because it gives me access to things the WebBrowser doesn't, such as text nodes. I've seen several posts here and on the web where people say you must call ReleaseComObject for every COM object you reference. So, say I do this:
var doc = myBrowser.Document.DomDocument as IHTMLDocument2;
Do I need to release doc? What about body in this code:
var body = (myBrowser.Document.DomDocument as IHTMLDocument2).body;
Aren't these objects wrapped by an RCW that would release them as soon as there are no more references to them? If not, would it be a good idea to create a wrapper for each of them with a finalizer (instead of using Dispose) that would release them as soon as the garbage collector kicks in (so that I don't need to worry about manually disposing them)?
The thing is, my application has a memory leak, and I believe it is related to this. According to the ANTS memory profiler, one of the functions (among many others that happen to use MSHTML objects) holding a reference to a bunch of Microsoft.CSharp.RuntimeBinder.Semantics.LocalVariableSymbol objects, which are at the top of the list of objects using memory in Generation 2, is this one:
internal static string GetAttribute(this IHTMLDOMNode element, string name)
{
    var attribute = element.IsHTMLElement() ? ((IHTMLElement)element).getAttribute(name) : null;
    if (attribute != null) return attribute.ToString();
    return "";
}
Not sure what's wrong here since attribute is just a string.
Here is another function that is shown on the ANTS profiler's Instance Retention Graph (I added a bunch of FinalReleaseComObject calls, but it is still shown):
private void InjectFunction(IHTMLDocument2 document)
{
    if (null == Document) throw new Exception("Cannot access current document's HTML or document is not an HTML.");
    try
    {
        IHTMLDocument3 doc3 = document as IHTMLDocument3;
        IHTMLElementCollection collection = doc3.getElementsByTagName("head");
        IHTMLDOMNode head = collection.item(0);
        IHTMLElement scriptElement = document.createElement("script");
        IHTMLScriptElement script = (IHTMLScriptElement)scriptElement;
        IHTMLDOMNode scriptNode = (IHTMLDOMNode)scriptElement;
        script.text = CurrentFuncs;
        head.AppendChild(scriptNode);
        if (Document.InvokeScript(CurrentTestFuncName) == null) throw new Exception("Cannot inject Javascript code right now.");
        Marshal.FinalReleaseComObject(scriptNode);
        Marshal.FinalReleaseComObject(script);
        Marshal.FinalReleaseComObject(scriptElement);
        Marshal.FinalReleaseComObject(head);
        Marshal.FinalReleaseComObject(collection);
        //Marshal.FinalReleaseComObject(doc3);
    }
    catch (Exception ex)
    {
        throw ex;
    }
}
I added the ReleaseComObject, but the function still seems to be holding a reference to something. Here is how my function looks now:
private void InjectFunction(IHTMLDocument2 document)
{
    if (null == Document) throw new Exception("Cannot access current document's HTML or document is not an HTML.");
    try
    {
        IHTMLDocument3 doc3 = document as IHTMLDocument3;
        IHTMLElementCollection collection = doc3.getElementsByTagName("head");
        IHTMLDOMNode head = collection.item(0);
        IHTMLElement scriptElement = document.createElement("script");
        IHTMLScriptElement script = (IHTMLScriptElement)scriptElement;
        IHTMLDOMNode scriptNode = (IHTMLDOMNode)scriptElement;
        script.text = CurrentFuncs;
        head.AppendChild(scriptNode);
        if (Document.InvokeScript(CurrentTestFuncName) == null) throw new Exception("Cannot inject Javascript code right now.");
        Marshal.FinalReleaseComObject(scriptNode);
        Marshal.FinalReleaseComObject(script);
        Marshal.FinalReleaseComObject(scriptElement);
        Marshal.FinalReleaseComObject(head);
        Marshal.FinalReleaseComObject(collection);
        Marshal.ReleaseComObject(doc3);
    }
    catch (Exception ex)
    {
        MessageBox.Show("Couldn't release!");
        throw ex;
    }
}
The MessageBox.Show("Couldn't release!"); line is never hit, so I assume everything is being released properly. Here is what ANTS shows:
I have no idea what that site container thing is.
The RCW will release the COM object when the RCW is finalized, so you don't need to create a wrapper that does this. You call ReleaseComObject because you don't want to wait around for the finalization; this is the same rationale as for the Dispose pattern. So creating wrappers that can be Disposed isn't a bad idea (and there are examples out there).
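For what it's worth, a minimal sketch of such a disposable wrapper, assuming all you want is a deterministic ReleaseComObject; ComHandle is a made-up name, not an existing library type:

// Hedged sketch: a tiny wrapper so COM references can sit in a using block.
using System;
using System.Runtime.InteropServices;

public sealed class ComHandle<T> : IDisposable where T : class
{
    public T Value { get; private set; }

    public ComHandle(T comObject)
    {
        Value = comObject;
    }

    public void Dispose()
    {
        // Only release genuine COM objects; ignore plain managed references.
        if (Value != null && Marshal.IsComObject(Value))
        {
            Marshal.ReleaseComObject(Value);
        }
        Value = null;
    }
}

Usage would be along the lines of using (var doc = new ComHandle<IHTMLDocument2>(...)) { ... use doc.Value ... }, which keeps the release on the same code path as the acquisition.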
For var doc = myBrowser.Document.DomDocument ...;, you should also capture .Document in a separate variable and ReleaseComObject it as well. Any time you reference a property of a COM object which produces another object, make sure to release it.
In GetAttribute, you're casting the element to another interface. In COM programming, that adds another reference. You'll need to do something like var htmlElement = (IHTMLElement) element; so you can release that as well.
Edit - this is the pattern to use when working with COM objects:
IHTMLElement element = null;
try
{
    element = <some method or property returning a COM object>;
    // do something with element
}
catch (Exception ex) // although the exception type should be as specific as possible
{
    // log, whatever
    throw; // not "throw ex;" - that makes the call stack think the exception originated right here
}
finally
{
    if (element != null)
    {
        Marshal.ReleaseComObject(element);
        element = null;
    }
}
This should really be done for every COM object reference you have.
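Applied to the body example from the question, the pattern would look roughly like this (a sketch: myBrowser is the question's WebBrowser control, and whether releasing the document wrapper is safe depends on who else holds it):

// Sketch of the question's body access with each intermediate COM object released.
IHTMLDocument2 doc = null;
IHTMLElement body = null;
try
{
    doc = myBrowser.Document.DomDocument as IHTMLDocument2;
    if (doc != null)
    {
        body = doc.body;
        // ... do something with body ...
    }
}
finally
{
    if (body != null) Marshal.ReleaseComObject(body);
    if (doc != null) Marshal.ReleaseComObject(doc);
}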
Perhaps this article sheds some light:
MSDN on how COM reference counting works, with some basic rules for when to call AddRef and Release.
In your case, Release corresponds to Marshal.ReleaseComObject.

Trying to understand Microsoft's implementation of WeakReference

As a seasoned C++ programmer trying to get accustomed to .NET, I'm bothered by an implementation detail in Microsoft's WeakReference "Target" property...
public class WeakReference : ISerializable
{
    internal IntPtr m_handle;
    internal bool m_IsLongReference;
    ...
    public virtual object Target
    {
        [SecuritySafeCritical]
        get
        {
            IntPtr handle = this.m_handle;
            if (IntPtr.Zero == handle)
            {
                return null;
            }
            object result = GCHandle.InternalGet(handle);
            if (!(this.m_handle == IntPtr.Zero))
            {
                return result;
            }
            return null;
        }
        [SecuritySafeCritical]
        set
        {
            IntPtr handle = this.m_handle;
            if (handle == IntPtr.Zero)
            {
                throw new InvalidOperationException(Environment.GetResourceString("InvalidOperation_HandleIsNotInitialized"));
            }
            object oldValue = GCHandle.InternalGet(handle);
            handle = this.m_handle;
            if (handle == IntPtr.Zero)
            {
                throw new InvalidOperationException(Environment.GetResourceString("InvalidOperation_HandleIsNotInitialized"));
            }
            GCHandle.InternalCompareExchange(handle, value, oldValue, false);
            GC.KeepAlive(this);
        }
    }
    ...
}
The thing that's bugging me is this - why are they checking the validity of m_handle twice? Particularly in the 'set' method - the use of the GC.KeepAlive at the end of the method should keep the WeakReference from being garbage collected, and thus keep the handle non-zero - right?
And in the case of the 'get' - once we've actually retrieved a reference to the target via InternalGet, why bother checking the original m_handle value again? All I can think is that perhaps they're trying to guard against the WeakReference being disposed and finalized either during or after the InternalGet - but surely, couldn't it also be disposed and finalized before we get around to returning the object? I just can't come up w/ a valid explanation as to why this double-checking is necessary here...
All I can think is that perhaps they're trying to guard against the
WeakReference being disposed and finalized either during or after the
InternalGet
That's exactly right.
but surely, couldn't it also be disposed and finalized
before we get around to returning the object?
No, because at that point a strong reference to the object has already been created. InternalGet returns a strong reference, and once that strong reference (stored in result in the getter, or oldValue in the setter) points to the object, the object can no longer be reclaimed by the garbage collector.
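A small sketch of the same idea from the consumer's side: once Target hands back a non-null reference, it is that strong reference, not the WeakReference, that keeps the object alive.

// Sketch: the local strong reference keeps the object alive, not the WeakReference.
var weak = new WeakReference(new object());

object strong = weak.Target;   // may already be null if the target was collected
if (strong != null)
{
    // While "strong" is reachable, the GC cannot reclaim the target,
    // even though "weak" alone would not keep it alive.
    Console.WriteLine(strong.GetHashCode());
}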

C# - My application and interoperability (DLL/COM) with an external application

I've been developing a C# application that uses DLL interop to an external database application.
This external app starts up at the same time along with my C# app and is available as long as my C# app is running.
Now the real question is related to managing the objects that I need to create to interact with the external application.
When I declare objects that are available from the referenced DLLs, these objects have methods that operate on (proprietary) files and run queries (as if done through the external app's GUI). Some of these objects are destroyed "by me" using Marshal.ReleaseComObject(A_OBJECT), while others run in a different application domain: I use AppDomain.CreateDomain("A_DOMAIN"), do the operations, and call AppDomain.Unload("A_DOMAIN"), releasing the DLLs used for the operation...
These workarounds are meant to ensure that the external application doesn't "block" the files used in these operations, so that they can still be deleted or moved from a folder.
e.g.
private static ClientClass objApp = new ClientClass();

public bool ImportDelimitedFile(
    string fileToImport,
    string outputFile,
    string rdfFile)
{
    GENERICIMPORTLib import = new GENERICIMPORTLibClass();
    try
    {
        import.ImportDelimFile(fileToImport, outputFile, 0, "", rdfFile, 0);
        return true;
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
        return false;
    }
    finally
    {
        System.Runtime.InteropServices.Marshal.ReleaseComObject(import);
        import = null;
    }
}
public int DbNumRecs(string file)
{
    if (!File.Exists(file))
    {
        return -1;
    }
    System.AppDomain newDomain = System.AppDomain.CreateDomain("A_DOMAIN");
    COMMONIDEACONTROLSLib db = new COMMONIDEACONTROLSLibClass();
    try
    {
        db = objApp.OpenDatabase(file);
        int count = (int)db.Count;
        db.Close();
        objApp.CloseDatabase(file);
        return count;
    }
    catch (Exception ex)
    {
        return -1;
    }
    finally
    {
        System.AppDomain.Unload(newDomain);
        GC.Collect();
        GC.WaitForPendingFinalizers();
    }
}
Both of these "solutions" were reached by trial and error, because I do not have any kind of API manual. Are these solutions correct? Can you explain the differences to me? Do I really need both solutions, or should one suffice?
Thanks!
Your use of AppDomains is wrong. Just because you create a new AppDomain before line X doesn't mean that line X is actually executing in that AppDomain.
You need to marshal a proxy class back across your AppDomain and use it in the current one.
public sealed class DatabaseProxy : MarshalByRefObject
{
    public int NumberOfRecords(string file)
    {
        // Note: objApp must also be available inside this AppDomain.
        COMMONIDEACONTROLSLib db = new COMMONIDEACONTROLSLibClass();
        try
        {
            db = objApp.OpenDatabase(file);
            int count = (int)db.Count;
            db.Close();
            objApp.CloseDatabase(file);
            return count;
        }
        catch (Exception ex)
        {
            return -1;
        }
    }
}
and
public int NumberOfRecords(string file)
{
    System.AppDomain newDomain = null;
    try
    {
        newDomain = System.AppDomain.CreateDomain("A_DOMAIN");
        var proxy = (DatabaseProxy)newDomain.CreateInstanceAndUnwrap(
            typeof(DatabaseProxy).Assembly.FullName,
            typeof(DatabaseProxy).FullName);
        return proxy.NumberOfRecords(file);
    }
    finally
    {
        if (newDomain != null)
        {
            System.AppDomain.Unload(newDomain);
        }
    }
}
You can actually create and marshal back the COM object itself instead of instantiating it via your proxy. This code was written here from scratch and not tested, so it may be buggy.
The first solution is the best one. Unmanaged COM uses a reference-counting scheme; IUnknown is the underlying reference-counting interface: http://msdn.microsoft.com/en-us/library/ms680509(VS.85).aspx. When the reference count reaches zero, it is freed.
When you create a COM object in .NET, a wrapper is created around the COM object. The wrapper maintains a pointer to the underlying IUnknown. When garbage collection occurs, the wrapper will call the underlying IUnknown::Release() function to free the COM object during finalization. As you noticed, the problem is that sometimes the COM object locks certain critical resources. By calling Marshal.ReleaseComObject, you force an immediate call to IUnknown::Release without needing to wait (or initiate) a general garbage collection. If no other references to the COM object are held, then it will immediately be freed. Of course, the .NET wrapper becomes invalid after this point.
The second solution apparently works because of the call to GC.Collect(). The solution is more clumsy, slower, and less reliable (the COM object might not necessarily be garbage collected: behavior is dependent on the specific .NET Framework version). The use of AppDomain contributes nothing as your code doesn't actually do anything apart from creating an empty domain and then unloading it. AppDomains are useful for isolating loaded .NET Framework assemblies. Because unmanaged COM code is involved, AppDomains won't really be useful (if you need isolation, use process isolation). The second function can probably be rewritten as:
public int DbNumRecs(string file) {
    if (!File.Exists(file)) {
        return -1;
    }
    // don't need to use AppDomain
    COMMONIDEACONTROLSLib db = null; // don't need to initialize the COM class here
    try {
        db = objApp.OpenDatabase(file);
        return (int)db.Count;
    } catch (Exception) { // don't need to declare an unused ex variable
        return -1;
    } finally {
        try {
            if (db != null) {
                db.Close();
                Marshal.ReleaseComObject(db);
            }
            objApp.CloseDatabase(file); // is this line really needed?
        } catch (Exception) {} // silently ignore exceptions when closing
    }
}

Finalizer launched while its object was still being used

Summary: C#/.NET is supposed to be garbage collected. C# has a destructor, used to clean up resources. What happens when an object A is garbage collected on the same line where I try to clone one of its member variables? Apparently, on multiprocessors, sometimes, the garbage collector wins...
The problem
Today, on a training session on C#, the teacher showed us some code which contained a bug only when run on multiprocessors.
I'll summarize by saying that sometimes the compiler or the JIT screws up by letting the finalizer of a C# class object run before one of its methods has returned.
The full code, given in the Visual C++ 2005 documentation, will be posted as an "answer" to avoid making a very, very large question, but the essentials are below:
The following class has a "Hash" property which returns a cloned copy of an internal array. At construction, the first item of the array is set to 2. In the destructor, it is set to zero.
The point is: If you try to get the "Hash" property of "Example", you'll get a clean copy of the array, whose first item is still 2, as the object is being used (and as such, not being garbage collected/finalized):
public class Example
{
    private int nValue;
    public int N { get { return nValue; } }

    // The Hash property is slower because it clones an array. When
    // KeepAlive is not used, the finalizer sometimes runs before
    // the Hash property value is read.
    private byte[] hashValue;
    public byte[] Hash { get { return (byte[])hashValue.Clone(); } }

    public Example()
    {
        nValue = 2;
        hashValue = new byte[20];
        hashValue[0] = 2;
    }

    ~Example()
    {
        nValue = 0;
        if (hashValue != null)
        {
            Array.Clear(hashValue, 0, hashValue.Length);
        }
    }
}
But nothing is so simple...
The code using this class is working inside a thread, and of course, for the test, the app is heavily multithreaded:
public static void Main(string[] args)
{
    Thread t = new Thread(new ThreadStart(ThreadProc));
    t.Start();
    t.Join();
}

private static void ThreadProc()
{
    // running is a boolean which is always true until
    // the user presses ENTER
    while (running) DoWork();
}
The DoWork static method is the code where the problem happens:
private static void DoWork()
{
    Example ex = new Example();
    byte[] res = ex.Hash; // [1]
    // If the finalizer runs before the call to the Hash
    // property completes, the hashValue array might be
    // cleared before the property value is read. The
    // following test detects that.
    if (res[0] != 2)
    {
        // Oops... The finalizer of ex was launched before
        // the Hash method/property completed
    }
}
About once every 1,000,000 executions of DoWork, the garbage collector apparently does its magic and tries to reclaim "ex", since it is no longer referenced in the remaining code of the function, and this time it is faster than the "Hash" getter. So what we end up with is a clone of a zeroed byte array instead of the right one (with the first item at 2).
My guess is that there is inlining of the code, which essentially replaces the line marked [1] in the DoWork function by something like:
// Supposed inlined processing
byte[] res2 = ex.Hash2;
// note that after this line, "ex" could be garbage collected,
// but not res2
byte[] res = (byte[])res2.Clone();
If we suppose Hash2 is a simple accessor coded like:
// Hash2 code:
public byte[] Hash2 { get { return (byte[])hashValue; } }
So, the question is: is this supposed to work that way in C#/.NET, or could this be considered a bug in either the compiler or the JIT?
edit
See Chris Brumme's and Chris Lyons' blogs for an explanation.
http://blogs.msdn.com/cbrumme/archive/2003/04/19/51365.aspx
http://blogs.msdn.com/clyon/archive/2004/09/21/232445.aspx
Everyone's answer was interesting, but I couldn't choose one better than the other. So I gave you all a +1...
Sorry
:-)
Edit 2
I was unable to reproduce the problem on Linux/Ubuntu/Mono, despite using the same code under the same conditions (multiple instances of the same executable running simultaneously, release mode, etc.).
It's simply a bug in your code: finalizers should not be accessing managed objects.
The only reason to implement a finalizer is to release unmanaged resources. And in this case, you should carefully implement the standard IDisposable pattern.
With this pattern, you implement a protected method "protected Dispose(bool disposing)". When this method is called from the finalizer, it cleans up unmanaged resources, but does not attempt to clean up managed resources.
In your example, you don't have any unmanaged resources, so should not be implementing a finalizer.
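For reference, a condensed sketch of that standard pattern; the unmanaged handle field and the native CloseHandle call mentioned in the comment are purely illustrative:

// Condensed sketch of the standard IDisposable pattern; the handle field is illustrative.
public class ResourceHolder : IDisposable
{
    private IntPtr unmanagedHandle;   // hypothetical unmanaged resource
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);    // no need to run the finalizer after an explicit Dispose
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // Called from Dispose(): safe to touch other managed objects here.
        }
        // Called from either path: release unmanaged resources only.
        if (unmanagedHandle != IntPtr.Zero)
        {
            // e.g. CloseHandle(unmanagedHandle);  // hypothetical native call
            unmanagedHandle = IntPtr.Zero;
        }
        disposed = true;
    }

    ~ResourceHolder()
    {
        Dispose(false);   // managed members may already be finalized; don't touch them
    }
}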
What you're seeing is perfectly natural.
You don't keep a reference to the object that owns the byte array, so that object (not the byte array) is actually free for the garbage collector to collect.
The garbage collector really can be that aggressive.
So if you call a method on your object which returns a reference to an internal data structure, and the finalizer for your object messes up that data structure, you need to keep a live reference to the object as well.
The garbage collector sees that the ex variable isn't used in that method any more, so it can, and as you notice, will garbage collect it under the right circumstances (ie. timing and need).
The correct way to do this is to call GC.KeepAlive on ex, so add this line of code to the bottom of your method, and all should be well:
GC.KeepAlive(ex);
I learned about this aggressive behavior by reading the book Applied .NET Framework Programming by Jeffrey Richter.
This looks like a race condition between your work thread and the GC thread(s); to avoid it, I think there are two options:
(1) change your if statement to use ex.Hash[0] instead of res, so that ex cannot be GC'd prematurely, or
(2) lock ex for the duration of the call to Hash
That's a pretty spiffy example - was the teacher's point that there may be a bug in the JIT compiler that only manifests on multicore systems, or that this kind of coding can have subtle race conditions with garbage collection?
I think what you are seeing is reasonable behavior due to the fact that things are running on multiple threads. This is the reason for the GC.KeepAlive() method, which should be used in this case to tell the GC that the object is still being used and that it isn't a candidate for cleanup.
Looking at the DoWork function in your "full code" response, the problem is that immediately after this line of code:
byte[] res = ex.Hash;
the function no longer makes any references to the ex object, so it becomes eligible for garbage collection at that point. Adding the call to GC.KeepAlive would prevent this from happening.
Yes, this is an issue that has come up before.
It's even more fun in that you need to run a release build for this to happen, and you end up scratching your head going 'huh, how can that be null?'.
Interesting comment from Chris Brumme's blog
http://blogs.msdn.com/cbrumme/archive/2003/04/19/51365.aspx
class C {
    IntPtr _handle;
    static void OperateOnHandle(IntPtr h) { ... }
    void m() {
        OperateOnHandle(_handle);
        ...
    }
    ...
}

class Other {
    void work() {
        if (something) {
            C aC = new C();
            aC.m();
            ... // most people guess collection can happen here
        } else {
            ...
        }
    }
}
So we can’t say how long ‘aC’ might live in the above code. The JIT might report the reference until Other.work() completes. It might inline Other.work() into some other method, and report aC even longer. Even if you add “aC = null;” after your usage of it, the JIT is free to consider this assignment to be dead code and eliminate it. Regardless of when the JIT stops reporting the reference, the GC might not get around to collecting it for some time.
It’s more interesting to worry about the earliest point that aC could be collected. If you are like most people, you’ll guess that the soonest aC becomes eligible for collection is at the closing brace of Other.work()’s “if” clause, where I’ve added the comment. In fact, braces don’t exist in the IL. They are a syntactic contract between you and your language compiler. Other.work() is free to stop reporting aC as soon as it has initiated the call to aC.m().
It's perfectly normal for the finalizer to be called in your DoWork method, since after the ex.Hash call the CLR knows that the ex instance won't be needed anymore...
Now, if you want to keep the instance alive do this:
private static void DoWork()
{
    Example ex = new Example();
    byte[] res = ex.Hash; // [1]
    // If the finalizer runs before the call to the Hash
    // property completes, the hashValue array might be
    // cleared before the property value is read. The
    // following test detects that.
    if (res[0] != 2) // NOTE
    {
        // Oops... The finalizer of ex was launched before
        // the Hash method/property completed
    }
    GC.KeepAlive(ex); // keep our instance alive in case we need it.. uh.. we don't
}
GC.KeepAlive does... nothing :) It's an empty, non-inlinable/non-jittable method whose only purpose is to trick the GC into thinking the object will still be used after this point.
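Roughly speaking (this is a simplified sketch, not the actual BCL source), it behaves as if it were declared like this:

using System.Runtime.CompilerServices;

public static class GCSketch
{
    // Simplified stand-in for GC.KeepAlive: an empty, non-inlinable method.
    // Passing "obj" forces the JIT to report the reference as live up to the call.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static void KeepAlive(object obj)
    {
        // Intentionally empty.
    }
}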
WARNING: Your example would be perfectly valid if the DoWork method were a managed C++ method... You DO have to keep the managed instances alive manually if you don't want the destructor to be called from within another thread. I.e. you pass a reference to a managed object that is going to delete a blob of unmanaged memory when finalized, and the method is using that same blob. If you don't keep the instance alive, you're going to have a race condition between the GC and your method's thread.
And this will end up in tears. And managed heap corruption...
The Full Code
You'll find the full code below, copy/pasted from a Visual C++ 2008 .cs file. As I'm now on Linux, without any Mono compiler or knowledge of how to use it, there's no way I can run tests now. Still, a couple of hours ago, I saw this code work and exhibit its bug:
using System;
using System.Threading;

public class Example
{
    private int nValue;
    public int N { get { return nValue; } }

    // The Hash property is slower because it clones an array. When
    // KeepAlive is not used, the finalizer sometimes runs before
    // the Hash property value is read.
    private byte[] hashValue;
    public byte[] Hash { get { return (byte[])hashValue.Clone(); } }
    public byte[] Hash2 { get { return (byte[])hashValue; } }

    public int returnNothing() { return 25; }

    public Example()
    {
        nValue = 2;
        hashValue = new byte[20];
        hashValue[0] = 2;
    }

    ~Example()
    {
        nValue = 0;
        if (hashValue != null)
        {
            Array.Clear(hashValue, 0, hashValue.Length);
        }
    }
}

public class Test
{
    private static int totalCount = 0;
    private static int finalizerFirstCount = 0;

    // This variable controls the thread that runs the demo.
    private static bool running = true;

    // In order to demonstrate the finalizer running first, the
    // DoWork method must create an Example object and invoke its
    // Hash property. If there are no other calls to members of
    // the Example object in DoWork, garbage collection reclaims
    // the Example object aggressively. Sometimes this means that
    // the finalizer runs before the call to the Hash property
    // completes.
    private static void DoWork()
    {
        totalCount++;

        // Create an Example object and save the value of the
        // Hash property. There are no more calls to members of
        // the object in the DoWork method, so it is available
        // for aggressive garbage collection.
        Example ex = new Example();

        // Normal processing
        byte[] res = ex.Hash;

        // Supposed inlined processing
        //byte[] res2 = ex.Hash2;
        //byte[] res = (byte[])res2.Clone();

        // successful try to keep reference alive
        //ex.returnNothing();

        // Failed try to keep reference alive
        //ex = null;

        // If the finalizer runs before the call to the Hash
        // property completes, the hashValue array might be
        // cleared before the property value is read. The
        // following test detects that.
        if (res[0] != 2)
        {
            finalizerFirstCount++;
            Console.WriteLine("The finalizer ran first at {0} iterations.", totalCount);
        }

        //GC.KeepAlive(ex);
    }

    public static void Main(string[] args)
    {
        Console.WriteLine("Test:");

        // Create a thread to run the test.
        Thread t = new Thread(new ThreadStart(ThreadProc));
        t.Start();

        // The thread runs until Enter is pressed.
        Console.WriteLine("Press Enter to stop the program.");
        Console.ReadLine();

        running = false;

        // Wait for the thread to end.
        t.Join();

        Console.WriteLine("{0} iterations total; the finalizer ran first {1} times.", totalCount, finalizerFirstCount);
    }

    private static void ThreadProc()
    {
        while (running) DoWork();
    }
}
For those interested, I can send the zipped project through email.
