Why does .NET Object have a Finalize() method? - c#

I know that the Finalize method is used by the garbage collector to let an object free up unmanaged resources. And from what I know, Object.Finalize is never called directly by the GC (an object is added to the finalization queue during its construction if its type overrides the Finalize method by implementing a finalizer).
Object.Finalize is only called from autogenerated finalizer code:
try
{
    // My class finalize implementation
}
finally
{
    base.Finalize(); // The chain of base calls eventually reaches Object.Finalize
}
So an arbitrary class derived from Object never calls Object.Finalize - you need a finalizer for Object.Finalize to make sense, and for most classes it makes no sense and goes unused (not to mention that its implementation is in fact empty).
Would it be too complex to check for the existence of a Finalize method in a class without it having to override Object.Finalize, and to generate the root finalizer without the try { } finally { base.Finalize(); } call? Something similar to the Add method used by collection initializers - you don't have to implement any interface or override anything, just provide a public void Add(item) method.
It would complicate the C# compiler a bit, but it would make finalizers run slightly faster by removing one redundant call and, most importantly, make the Object class easier to understand by not having a protected Finalize method with an empty implementation when Object has nothing to finalize.
Also, it might be possible to implement a FinalizableObject class derived from Object and have the compiler derive every class with a finalizer from it. It could implement IDisposable and make the dispose pattern recommended by Microsoft reusable, without the need to implement it in every class. Actually, I'm surprised such a base class doesn't exist.
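For illustration only, here is a rough sketch of what such a hypothetical FinalizableObject base class might look like (no such class exists in the .NET base class library):
using System;

// Hypothetical sketch - this type is not part of the framework.
public abstract class FinalizableObject : IDisposable
{
    ~FinalizableObject()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    // Every class with cleanup to do would override this instead of
    // re-implementing the whole dispose pattern.
    protected virtual void Dispose(bool disposing)
    {
    }
}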

Edit
The garbage collector does not call Object.Finalize on a derived type unless the method is overridden. Why is it available to all objects? So that it can be overridden when needed; unless it is, there is no performance impact. Looking at the documentation here, it states:
The Object class provides no implementation for the Finalize method, and the garbage collector does not mark types derived from Object for finalization unless they override the Finalize method.
Notes on finalization
Quoting directly from Ben Watson's excellent book Writing High-Performance .NET Code, as he explains it far better than I ever could:
Never implement a finalizer unless it is required. Finalizers are code, triggered by the garbage collector to cleanup unmanaged resources. They are called from a single thread, one after the other, and only after the garbage collector declares the object dead after a collection. This means that if your class implements a finalizer, you are guaranteeing that it will stay in memory even after the collection that should have killed it. This decreases overall GC efficiency and ensures that your program will dedicate CPU resources to cleaning up your object.
If you do implement a finalizer, you must also implement the IDisposable interface to enable explicit cleanup, and call GC.SuppressFinalize(this) in the Dispose method to remove the object from the finalization queue.
As long as you call Dispose before the next collection, then it will clean up the object properly without the need for the finalizer to run. The following example correctly demonstrates this pattern:
class Foo : IDisposable
{
    ~Foo()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            this.managedResource.Dispose();
        }

        // Cleanup unmanaged resource
        UnsafeClose(this.handle);

        // If the base class is IDisposable object, make sure you call:
        // base.Dispose();
    }
}
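For completeness, a sketch of how such a class is typically consumed (assuming the Foo above is completed with real managedResource and handle members):
// Dispose runs deterministically when the using block exits, and
// GC.SuppressFinalize means the finalizer never has to run.
using (var foo = new Foo())
{
    // work with foo
}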
Note: Some people think that finalizers are guaranteed to run. This is generally true, but not absolutely so. If a program is force-terminated then no more code runs and the process dies immediately. There is also a time limit to how long all of the finalizers are given on process shutdown. If your finalizer is at the end of the list, it may be skipped. Moreover, because finalizers execute sequentially, if another finalizer has an infinite loop bug in it, then no finalizers after it will ever run. While finalizers are not run on a GC thread, they are triggered by a GC, so if you have no collections, the finalizers will not run. Therefore, you should not rely on finalizers to clean up state external to your process.
Microsoft has a good write-up on finalizers and the Dispose pattern here

The C# language destructor syntax obscures too much about what a finalizer really does. Perhaps best demonstrated with a sample program:
using System;

class Program {
    static void Main(string[] args) {
        var obj = new Example();
        obj = null; // Avoid debugger extending its lifetime
        GC.Collect();
        GC.WaitForPendingFinalizers();
        Console.ReadLine();
    }
}

class Base { ~Base() { Console.WriteLine("Base finalizer called"); } }
class Derived : Base { ~Derived() { Console.WriteLine("Derived finalizer called"); } }
class Example : Derived { }
Output:
Derived finalizer called
Base finalizer called
There are some noteworthy things about this behavior. The Example class itself does not have a finalizer, yet its base class finalizers are called anyway. That the Derived class finalizer is called before the Base class finalizer is not accidental. And note that the Derived class' finalizer has no call to base.Finalize(), even though the MSDN article for Object.Finalize() demands that it does, yet it is called anyway.
You may easily recognize this behavior: it is the way a virtual method behaves, one whose override calls the base method, like virtual method overrides commonly do. Which is exactly what it is inside the CLR: Finalize() is a plain virtual method like any other. The actual code generated by the C# compiler for the Derived class's destructor resembles this:
protected override void Finalize() {
    try {
        Console.WriteLine("Derived finalizer called");
    }
    finally {
        base.Finalize();
    }
}
Not valid code, but the way it could be reverse-engineered from the MSIL. The C# syntax sugar ensures you can never forget to call the base finalizer and that it can't be aborted by a thread abort or AppDomain unload. The C# compiler does not otherwise help and auto-generate a finalizer for the Example class; the CLR does the necessary work of finding the finalizer of the most-derived class, traversing the method tables of the base classes until it finds one. And it likewise helps in the class loader by setting a flag to indicate that Example has base classes with a finalizer, so it needs to be treated specially by the GC. The Base class finalizer calls Object.Finalize(), even though it doesn't do anything.
So the key point is that Finalize() is actually a virtual method. It therefore needs a slot in the method table for Object so a derived class can override it. Whether it could have been done differently is fairly subjective. Certainly not easily, and not without forcing every language implementation to special-case it.
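As a side note, you can see this virtual slot with reflection; a small sketch:
using System;
using System.Reflection;

class Program
{
    static void Main()
    {
        // Object.Finalize is protected, so it is only visible with NonPublic | Instance.
        MethodInfo finalize = typeof(object).GetMethod(
            "Finalize", BindingFlags.NonPublic | BindingFlags.Instance);

        Console.WriteLine(finalize.IsVirtual);  // True
        Console.WriteLine(finalize.IsFamily);   // True (protected)
    }
}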

Related

Why does the traditional Dispose pattern suppress finalize?

Assuming this as the traditional Dispose pattern (taken from devx but seen on many websites)
class Test : IDisposable
{
    private bool isDisposed = false;

    ~Test()
    {
        Dispose(false);
    }

    protected void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Code to dispose the managed resources of the class
        }

        // Code to dispose the un-managed resources of the class
        isDisposed = true;
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }
}
I don't understand why we call GC.SuppressFinalize(this). Does this require me to write my own managed resource disposal, including nulling my references? I'm a bit lost, I must admit. Could someone shed some light on this pattern?
Ideally, I would like to only dispose my unmanaged resources and let the GC do the managed collecting by itself.
Actually, I don't even know why we specify a finalizer. In any case, the coder should call dispose himself, now shouldn't he? If that's just a fallback mechanism, I'd remove it.
The IDisposable pattern is used so that the object can clean up its resources deterministically, at the point when the Dispose method is called by the client code.
The finaliser is only there as a fallback in case the client code fails to call Dispose for some reason.
If the client code calls Dispose then the clean-up of resources is performed there-and-then and doesn't need to be done again during finalisation. Calling SuppressFinalize in this situation means that the object no longer incurs the extra GC cost of finalisation.
And, if your own class only uses managed resources then a finaliser is completely unnecessary: The GC will take care of any managed resources, let those resources themselves worry about whether they need a fallback finaliser. You should only consider a finaliser in your own class if it directly handles unmanaged resources.
SuppressFinalize only suppresses any custom finalizer.
It does not alter any other GC behavior.
You never need to explicitly null out references. (Unless you want them to be collected early)
There is no difference between a class without any finalizer and an instance on which you've called SuppressFinalize.
Calling SuppressFinalize prevents an extra call to Dispose(false), and makes the GC somewhat faster. (finalizers are expensive)
Note that classes without unmanaged resources should not have a finalizer. (They should still call SuppressFinalize, unless they're sealed; this allows inherited classes to add unmanaged resources)
The SuppressFinalize call exists in case some derived class decides to add a finalizer. If a normal dispose completes successfully, finalization won't be necessary; even if a derived class decides to add one, the SuppressFinalize call will prevent it from executing and interfering with garbage collection.
To understand why this is important, you should think of finalization not as being part of garbage collection, but rather something that happens before it. When a class registers for finalization (automatic on creation, if it overrides Finalize) it is put into a special list called the Finalization Queue. No object in the Finalization Queue, nor any object referenced directly or indirectly by an object in the queue, can be garbage-collected, but if any object in the finalization queue is found to have no rooted references other than from the queue, the object will be pulled from the queue and the finalizer will run. While the finalizer is being dispatched, the object will not be collectable (since a reference will exist during the dispatch); once the finalizer is complete, there will usually not be any references to the object anymore, so it (and objects referenced thereby) will usually be collectable.
Personally, I think the SuppressFinalize call is silly, since I can think of no good reason why a derived class should ever have a finalizer. If a derived class is going to add some unmanaged resources which the parent class will know nothing about, another class should be created for the purpose of holding those resources; the parent class should hold a reference to that. That way, the parent class itself will not need finalization, and objects which are referenced by the parent class won't be needlessly blocked from garbage collection.
From MSDN:
"This method sets a bit in the object header, which the system checks when calling finalizers. The obj parameter is required to be the caller of this method. Objects that implement the IDisposable interface can call this method from the IDisposable.Dispose method to prevent the garbage collector from calling Object.Finalize on an object that does not require it."
So it prevents an extra call from the GC. If it is called from within the finalizer method, while the object is being finalized, then it won't do anything, as the object is already being finalized.
Otherwise, the GC is allowed to reclaim the memory without finalizing the object, which makes things faster.
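A minimal sketch that makes the effect visible (the Demo class and its console output are purely illustrative):
using System;

class Demo : IDisposable
{
    public void Dispose()
    {
        Console.WriteLine("Dispose called");
        GC.SuppressFinalize(this);   // the finalizer below will no longer run
    }

    ~Demo()
    {
        Console.WriteLine("Finalizer called");
    }
}

class Program
{
    static void Main()
    {
        new Demo().Dispose();   // prints "Dispose called"; no finalizer later
        new Demo();             // never disposed; finalizer runs on collection

        GC.Collect();
        GC.WaitForPendingFinalizers();
    }
}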
As noted on MSDN, executing the Finalize method is costly. By calling Dispose you have already cleaned up your class yourself, so the finalizer doesn't need to be called. The finalizer is implemented in case Dispose is never called directly by your code (or by whoever 'owns' the instance).
// If the monitor.Dispose method is not called, the example displays the following output:
// ConsoleMonitor instance....
// The ConsoleMonitor class constructor.
// The Write method.
// The ConsoleMonitor finalizer.
// The Dispose(False) method.
// Disposing of unmanaged resources.
//
// If the monitor.Dispose method is called, the example displays the following output:
// ConsoleMonitor instance....
// The ConsoleMonitor class constructor.
// The Write method.
// The Dispose method.
// The Dispose(True) method.
// Disposing of managed resources.
// Disposing of unmanaged resources.
From https://msdn.microsoft.com/en-us/library/system.gc.suppressfinalize(v=vs.110).aspx

Why should Dispose() be non-virtual?

I'm new to C#, so apologies if this is an obvious question.
In the MSDN Dispose example, the Dispose method they define is non-virtual. Why is that? It seems odd to me - I'd expect that a child class of an IDisposable that had its own non-managed resources would just override Dispose and call base.Dispose() at the bottom of their own method.
Thanks!
Typical usage is that Dispose() is overloaded, with a public, non-virtual Dispose() method, and a virtual, protected Dispose(bool). The public Dispose() method calls Dispose(true), and subclasses can use this protected virtual method to free up their own resources, and call base.Dispose(true) for parent classes.
If the class owning the public Dispose() method also implements a finalizer, then the finalizer calls Dispose(false), indicating that the protected Dispose(bool) method was called during garbage collection.
If there is a finalizer, then the public Dispose() method is also responsible for calling GC.SuppressFinalize() to make sure that the finalizer is no longer active, and will never be called. This allows the garbage collector to treat the class normally. Classes with active finalizers generally get collected only as a last resort, after gen0, gen1, and gen2 cleanup.
This is certainly not an obvious one. This pattern was specifically chosen because it works well in the following scenarios:
Classes that don't have a finalizer.
Classes that do have a finalizer.
Classes that can be inherited from.
While a virtual Dispose() method will work in the scenario where classes don't need finalization, it doesn't work well in the scenario where you do need finalization, because those types often need two kinds of clean-up, namely managed cleanup and unmanaged cleanup. For this reason the Dispose(bool) method was introduced in the pattern. It prevents duplication of cleanup code (this point is missing from the other answers), because the Dispose() method will normally clean up both managed and unmanaged resources, while the finalizer can only clean up unmanaged resources.
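As a sketch of how the pattern composes across an inheritance chain (BaseResource and extraManagedResource here are assumed names for illustration):
using System;

class BaseResource : IDisposable
{
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing) { /* base cleanup */ }
}

class DerivedResource : BaseResource
{
    private IDisposable extraManagedResource;   // assumed member, for illustration

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Reached only from Dispose(): safe to touch other managed objects.
            extraManagedResource?.Dispose();
        }

        // Unmanaged cleanup (if any) would go here; it runs from both
        // Dispose() and a finalizer, so it is never duplicated.

        base.Dispose(disposing);   // let the base class release its own resources
    }
}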
Although methods in an interface are not "virtual" in the usual sense, they can nevertheless still be implemented in classes that inherit them. This is apparently a convenience built into the C# language, allowing the creation of interface methods without requiring the virtual keyword, and implementing methods without requiring the override keyword.
Consequently, although the IDisposable interface contains a Dispose() method, it does not have the virtual keyword in front of it, nor do you have to use the override keyword in the inheriting class to implement it.
The usual Dispose pattern is to implement Dispose in your own class, and then call Dispose in the base class so that it can release the resources it owns, and so on.
A type's Dispose method should release all the resources that it owns. It should also release all resources owned by its base types by calling its parent type's Dispose method. The parent type's Dispose method should release all resources that it owns and in turn call its parent type's Dispose method, propagating this pattern through the hierarchy of base types.
http://msdn.microsoft.com/en-us/library/fs2xkftw.aspx
The Dispose method should not be virtual because it's not an extension point for the pattern to implement disposable. That means that the base disposable class in a hierarchy will create the top-level policy (the algorithm) for dispose and will delegate the details to the other method (Dispose(bool)). This top-level policy is stable and should not be overridden by child classes. If you allow child classes to override it, they might not call all the necessary pieces of the algorithm, which might leave the object in an inconsistent state.
This is akin to the template method pattern, in which a high-level method implements an algorithm skeleton and delegates the details to other overridable methods.
As a side note, I prefer another high-level policy for this particular pattern (which still uses a non-virtual Dispose).
Calls through an interface are always virtual, regardless of whether a "normal" call would be direct or virtual. If the method that actually does the work of disposing isn't virtual except when called via the interface, then any time the class wants to dispose itself it will have to make sure to cast its self-reference to IDisposable and call that.
In the template code, the non-virtual Dispose function is expected to always be the same in the parent and the child [simply calling Dispose(True)], so there's never any need to override it. All the work is done in the virtual Dispose(Boolean).
Frankly, I think using the Dispose pattern is a little bit silly in cases where there's no reason to expect descendant classes to directly hold unmanaged resources. In the early days of .net it was often necessary for classes to directly hold unmanaged resources, but today in most situations I see zero loss from simply implementing Dispose() directly. If a future descendant class needs to use unmanaged resources, it can and typically should wrap those resources in their own Finalizable objects.
On the other hand, for certain kinds of method there can be advantages to having a non-virtual base class method whose job is to chain to a protected virtual method, and having the virtual method be called Dispose(bool) is really no worse than VirtDispose() even if the supplied argument is rather useless. In some situations, for example, it may be necessary for all operations on an object to be guarded by a lock which is owned by the base-class object. Having the non-virtual base-class Dispose acquire the lock before calling the virtual method will free all the base classes from having to worry about the lock themselves.
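A sketch of that locking idea, assuming a hypothetical LockedResource base class:
using System;

public class LockedResource : IDisposable
{
    private readonly object gate = new object();
    private bool disposed;

    // Non-virtual Dispose establishes the policy: take the lock, dispose once.
    public void Dispose()
    {
        lock (gate)
        {
            if (disposed)
            {
                return;
            }

            disposed = true;
            Dispose(true);
        }

        GC.SuppressFinalize(this);
    }

    // Derived classes add their own cleanup here and never touch the lock.
    protected virtual void Dispose(bool disposing)
    {
    }
}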
The reason the sample's Dispose() method is non-virtual is because they take over the entire process in that example, and leave subclasses with the virtual Dispose(bool disposing) method to override. You'll notice that in the example, it stores a boolean field to ensure that the Dispose logic does not get invoked twice (potentially once from IDisposable, and once from the destructor). Subclasses who override the provided virtual method do not have to worry about this nuance. This is why the main Dispose method in the example is non-virtual.
I've got a quite detailed explanation of the dispose pattern here. Essentially, you provide a protected method to override that is more robust for unmanaged resources instead.
If the base class has resources that need to be cleaned up at Dispose() time, then having a virtual Dispose method that's overridden by an inheriting class prevents those resources from being released unless the inheriting class specifically calls the base's Dispose method. A better way to implement it would be to have each derived class implement IDisposable.
Another, not so obvious reason is to avoid the need to suppress CA1816 warnings for derived classes. These warnings look like this
[CA1816] Change Dispose() to call GC.SuppressFinalize(object). This will prevent derived types that introduce a finalizer from needing to re-implement 'IDisposable' to call it.
Here is an example
public class Base : IDisposable
{
    public virtual void Dispose()
    {
        ...
        GC.SuppressFinalize(this);
    }
}

public class Derived : Base
{
    public override void Dispose() // <- still warns for CA1816
    {
        base.Dispose();
        ...
    }
}
You can resolve this by just adopting the recommended Dispose pattern.
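For illustration, a sketch of the same hierarchy reworked along the lines of the recommended pattern, which keeps Dispose() non-virtual and calls GC.SuppressFinalize in exactly one place:
using System;

public class Base : IDisposable
{
    public void Dispose()                  // non-virtual entry point
    {
        Dispose(true);
        GC.SuppressFinalize(this);         // called once, here; CA1816 is satisfied
    }

    protected virtual void Dispose(bool disposing)
    {
        // base class cleanup
    }
}

public class Derived : Base
{
    protected override void Dispose(bool disposing)
    {
        // derived class cleanup
        base.Dispose(disposing);           // no warning: Dispose() is not re-declared
    }
}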

Why should we call SuppressFinalize when we don't have a destructor

I have a few questions for which I am not able to get a proper answer.
1) Why should we call SuppressFinalize in the Dispose function when we don't have a destructor?
2) Dispose and Finalize are used for freeing resources before the object is garbage collected. Whether it is a managed or an unmanaged resource we need to free it, so why do we need a condition inside the Dispose function, passing 'true' when we call this overloaded function from IDisposable.Dispose and 'false' when it is called from the finalizer?
See the code below, which I copied from the net.
class Test : IDisposable
{
    private bool isDisposed = false;

    ~Test()
    {
        Dispose(false);
    }

    protected void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Code to dispose the managed resources of the class
        }

        // Code to dispose the un-managed resources of the class
        isDisposed = true;
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }
}
What if I remove the protected Dispose(bool) function and implement it as below?
class Test : IDisposable
{
    private bool isDisposed = false;

    ~Test()
    {
        Dispose();
    }

    public void Dispose()
    {
        // Code to dispose the managed resources of the class
        // Code to dispose the un-managed resources of the class
        isDisposed = true;

        // Call this since we have a destructor. What if we don't have one?
        GC.SuppressFinalize(this);
    }
}
I'm going out on a limb here, but... most people don't need the full-blown dispose pattern. It's designed to be solid in the face of having direct access to unmanaged resources (usually via IntPtr) and in the face of inheritance. Most of the time, neither of these is actually required.
If you're just holding a reference to something else which implements IDisposable, you almost certainly don't need a finalizer - whatever holds the resource directly is responsible for dealing with that. You can make do with something like this:
public sealed class Foo : IDisposable
{
    private bool disposed;
    private FileStream stream;

    // Other code

    public void Dispose()
    {
        if (disposed)
        {
            return;
        }

        stream.Dispose();
        disposed = true;
    }
}
Note that this isn't thread-safe, but that probably won't be a problem.
By not having to worry about the possibility of subclasses holding resources directly, you don't need to suppress the finalizer (because there isn't one) - and you don't need to provide a way of subclasses customising the disposal either. Life is simpler without inheritance.
If you do need to allow uncontrolled inheritance (i.e. you're not willing to bet that subclasses will have very particular needs) then you need to go for the full pattern.
Note that with SafeHandle from .NET 2.0, it's even rarer that you need your own finalizer than it was in .NET 1.1.
To address your point about why there's a disposing flag in the first place: if you're running within a finalizer, other objects you refer to may already have been finalized. You should let them clean up themselves, and you should only clean up the resources you directly own.
Here are the main facts
1) Object.Finalize is what your class overrides when it has a finalizer. The ~TypeName() destructor method is just shorthand for 'override Finalize()', etc.
2) You call GC.SuppressFinalize if you are disposing of resources in your Dispose method before finalization (i.e. when coming out of a using block, etc.). If you do not have a finalizer, then you do not need to do this. If you have a finalizer, this ensures that the object is taken off of the finalization queue (so we don't dispose of stuff twice, as the finalizer usually calls the Dispose method as well).
3) You implement a finalizer as a 'fail safe' mechanism. Finalizers are guaranteed to run (as long as the CLR isn't aborted), so they allow you to make sure code gets cleaned up in the event that the Dispose method was not called (maybe the programmer forgot to create the instance within a 'using' block, etc.).
4) Finalizers are expensive, as types that have finalizers can't be garbage collected in a Generation-0 collection (the most efficient); they are promoted to Generation-1 with a reference to them on the F-Reachable queue, so that they represent a GC root. It's not until the GC performs a Generation-1 collection that the finalizer gets called and the resources are released - so implement finalizers only when very important, and make sure that objects that require finalization are as small as possible, because all objects that can be reached by your finalizable object will be promoted to Generation-1 as well.
Keep the first version, it is safer and is the correct implementation of the dispose pattern.
Calling SuppressFinalize tells the GC that you have done all the destruction/disposing yourself (of resources held by your class) and that it does not need to call the destructor.
You need the test in case the code using your class has already called dispose and you shouldn't tell the GC to dispose again.
See this MSDN document (Dispose methods should call SuppressFinalize).
1. Answer for the first question
Basically, you don't have to call the SuppressFinalize method if your class doesn't have a Finalize method (destructor). I believe people call SuppressFinalize even when there is no Finalize method because of a lack of knowledge.
2. Answer for the second question
The purpose of the Finalize method is to free unmanaged resources. The most important thing to understand is that Finalize is only called when the object is in the finalization queue. The garbage collector collects all the objects that can be destroyed, and it adds the objects that have finalizers to the finalization queue before destroying them. There is a separate .NET background process that calls the Finalize method for the objects in the finalization queue. By the time that background process executes the Finalize method, the other managed objects the instance references may already have been finalized, because there is no specific order when it comes to finalization. So the dispose pattern wants to make sure that the Finalize method does not try to access managed objects. That's why managed objects are cleaned up inside the "if (disposing)" clause, which is unreachable from the Finalize method.
You should always call SuppressFinalize() because you might have (or have in the future) a derived class that implements a Finalizer - in which case you need it.
Let's say you have a base class that doesn't have a Finalizer - and you decided not to call SuppressFinalize(). Then 3 months later you add a derived class that adds a Finalizer. It is likely that you will forget to go up to the base class and add a call to SuppressFinalize(). There is no harm in calling it if there is no finalizer.
My suggested IDisposable pattern is posted here: How to properly implement the Dispose Pattern

Disposing objects in the Destructor

I have an object that has a disposable object as a member.
public class MyClass
{
    private MyDisposableMember member;

    public void DoSomething()
    {
        using (member = new MyDisposableMember())
        {
            // Blah...
        }
    }
}
There can be many methods in MyClass, all requiring a using statement. But what if I did this instead?
public class MyClass
{
    private MyDisposableMember member = new MyDisposableMember();

    public void DoSomething()
    {
        // Do things with member :)
    }

    ~MyClass()
    {
        member.Dispose();
    }
}
As you can see, member is being disposed in the destructor. Would this work? Are there any problems with this approach?
Ideally, Dispose() should have already been called prior to finalization. It would be better to follow the typical dispose pattern, and allow the user to Dispose() the object properly, and have the finalizer Dispose of it if dispose has not already been called.
In this case, since you're encapsulating an IDisposable, you really don't need to implement the finalizer at all, though. (At the the point of finalization, your encapsulated member will get finalized, so there's no need to finalize your object - it just adds overhead.) For details, read this blog article I wrote on encapsulating an IDisposable.
You should probably make MyClass implement IDisposable. Inside the Dispose() method, call member.Dispose();. That way the programmer can have control over when the member gets disposed.
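A minimal sketch of that suggestion, reusing the MyClass and MyDisposableMember names from the question:
public class MyClass : IDisposable
{
    private MyDisposableMember member = new MyDisposableMember();

    public void DoSomething()
    {
        // Do things with member
    }

    // No finalizer needed: member owns the unmanaged resource (if any)
    // and has its own finalizer as a fallback.
    public void Dispose()
    {
        member.Dispose();
    }
}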
DO NOT DO THAT!
The GC will do that for you (indirectly, since the object being disposed, or one it references, will contain a destructor).
MyDisposableMember might even be finalized by the GC before you dispose it - and what happens then might not be what you intended.
Even worse: adding a destructor (or finalizer) to a class costs additional time when disposing of the object (much more time, as the object will stay in memory for at least one collection cycle and may even be promoted to the next generation).
Therefore, it would be completely useless and would even backfire.
In your first example the member is not really part of the object's state, since you're instantiating it every time it's used and disposing it right after. Since it's not part of the state, don't model it as such; just use a local variable when needed.
More generally, you should put all disposal logic in Dispose() and implement IDisposable, then use your class together with using or try/finally.
The only thing I see wrong (and it isn't an error) is that with a using statement you explicitly dispose of the object at that point in time (when your function/method is called). A destructor cannot be called explicitly; destructors are invoked automatically, so it may take some time for member to be disposed of. Better to implement the IDisposable interface for MyClass.
Following the Microsoft pattern is your best bet so the users of your class have full control over when it is disposed.
public class MyClass : IDisposable
{
    private MyDisposableMember member = new MyDisposableMember();

    public void DoSomething()
    {
        // Do things with member :)
    }

    ~MyClass()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing) // Release managed resources
        {
            member.Dispose();
        }

        // Release unmanaged resources
    }
}
When a finalizer runs, one of the following will be true about almost any IDisposable object to which it holds a reference:
The object will have already had its finalizer run, in which case calling Dispose on the object will be at best useless.
The object will not have had its finalizer run, but its finalizer will be scheduled to run, so calling Dispose on the object will be useless.
The object will still be in use by something other than the object being finalized, so calling Dispose on it would be bad.
There are a few situations where calling Dispose in a finalizer might be useful, but most situations fit the cases listed above, which all have a common feature: the finalizer shouldn't call Dispose.

When would dispose method not get called?

I was reading this article the other day and was wondering why there was a finalizer along with the Dispose method. I read here on SO why you might want to add Dispose to the finalizer. My curiosity is: when would the finalizer be called instead of the Dispose method itself? Is there a code example, or is it based on something happening on the system the software is running on? If so, what could happen so that the Dispose method is not run by the GC?
The purpose of the finaliser here is simply a safety precaution against memory leaks (if you happen not to call Dispose explicitly). It also means you don't have to dispose your objects if you want them to release resources when the program shuts down, since the GC will be forced to finalise and collect all objects anyway.
As a related point, it is important to dispose the object slightly differently when doing so from the finaliser.
~MyClass()
{
    Dispose(false);
}

public void Dispose()
{
    Dispose(true);
    GC.SuppressFinalize(this);
}

protected void Dispose(bool disposing)
{
    if (!this.disposed)
    {
        if (disposing)
        {
            // Dispose managed resources here.
        }

        // Dispose unmanaged resources here.
    }

    this.disposed = true;
}
The reason you do not want to dispose managed resources in your finaliser is that you would actually be creating strong references to them in doing so, and this could prevent the GC from doing its job properly and collecting them. Unmanaged resources (e.g. Win32 handles and such) should always be explicitly closed/disposed, of course, since the CLR has no knowledge of them.
This is mostly there to protect yourself. You cannot dictate what the end user of your class will do. By providing a finalizer in addition to a Dispose method, the GC will "Dispose" of your object, freeing your resources appropriately, even if the user forgets to call Dispose() or mis-uses your class.
The Finalizer is called when the object is garbage collected. Dispose needs to be explicitly called. In the following code the finalizer will be called but the Dispose method is not.
class Foo : IDisposable
{
    public void Dispose()
    {
        Console.WriteLine("Disposed");
    }

    ~Foo()
    {
        Console.WriteLine("Finalized");
    }
}
...
public void Go()
{
    Foo foo = new Foo();
}
The Dispose method must be explicitly called, either by calling Dispose() or by having the object in a using statement. The GC will always call the finalizer, so if there is something that needs to happen before the objects are disposed of, the finalizer should at least check to make sure that everything in the object is cleaned up.
You want to avoid cleaning up objects in the finalizer if at all possible, because it causes extra work compared to disposing them beforehand (like calling Dispose), but you should always at least check in the finalizer whether there are objects lying around that need to be removed.
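For contrast, a sketch of Go() rewritten with a using statement so that Dispose is called deterministically (based on the Foo class above):
public void Go()
{
    using (Foo foo = new Foo())
    {
        // "Disposed" is printed when this block exits; "Finalized" is still
        // printed later, because this Dispose does not call GC.SuppressFinalize.
    }
}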
An important but subtle note not yet mentioned: a seldom-considered purpose of Dispose is to prevent an object from being cleaned up prematurely. Objects with finalizers must be written carefully, lest a finalizer run earlier than expected. A finalizer can't run before the start of the last method call that will be made on an object(*), but it might sometimes run during the last method call if the object will be abandoned once the method completes. Code which properly Disposes an object can't abandon the object before calling Dispose, so there's no danger of a finalizer wreaking havoc on code which properly uses Dispose. On the other hand, if the last method to use an object makes use of entities which will be cleaned up by the finalizer after its last use of the object reference itself, it's possible for the garbage collector to call Finalize on the object and clean up entities that are still in use. The remedy is to ensure that any method which uses entities that are going to get cleaned up by a finalizer is followed at some point by a method call which makes use of "this". GC.KeepAlive(this) is a good method to use for that.
(*) Non-virtual methods which are expanded to in-line code that doesn't do anything with the object may be exempt from this rule, but Dispose usually is, or invokes, a virtual method.
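A sketch of the GC.KeepAlive remedy described above (the handle field and the NativeWrite call are assumptions, standing in for a raw handle and a P/Invoke call owned by the object):
public void Write(byte[] data)
{
    // Hypothetical: 'handle' is an unmanaged handle that this object's
    // finalizer would release, and NativeWrite is some native call using it.
    NativeWrite(this.handle, data);

    // Without this, the JIT may decide 'this' is dead before NativeWrite
    // returns, letting the finalizer release the handle while it is in use.
    GC.KeepAlive(this);
}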
