What's the purpose of implementing the IDisposable interface? I've seen some classes implementing it and I don't understand why.
If your class creates unmanaged resources, you can implement IDisposable so that those resources are cleaned up properly when the object is disposed of. You implement Dispose and release them there.
When your class makes use of some system resource, it's the class's responsibility to make sure the resource is freed too. By .NET design you're supposed to do that in the Dispose method of the class. The IDisposable interface marks that your class needs to free resources when it's no longer in use, and the Dispose method is made available so that users of your class can call it to free the consumed resources.
The IDisposable interface is also essential if you want automatic clean-up to work properly and want to use the using() statement.
As well as freeing unmanaged resources, objects can usefully perform some operation the moment they go out of scope. A useful example might be a timer object: such objects could print out the time elapsed since their construction in the Dispose() method. These objects could then be used to log the approximate time taken for some set of operations:
using(Timer tmr=new Timer("blah"))
{
// do whatever
}
This can be done manually, of course, but my feeling is that one should take advantage wherever possible of the compiler's ability to generate the right code automatically.
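A minimal sketch of such a timer class, assuming Stopwatch for the timing (the class name ScopedTimer is hypothetical, not a framework type):

using System;
using System.Diagnostics;

sealed class ScopedTimer : IDisposable
{
    private readonly string _label;
    private readonly Stopwatch _watch = Stopwatch.StartNew();

    public ScopedTimer(string label) { _label = label; }

    public void Dispose()
    {
        // Runs automatically when the using block exits, even on exception.
        _watch.Stop();
        Console.WriteLine($"{_label}: {_watch.ElapsedMilliseconds} ms");
    }
}

// Usage:
using (var tmr = new ScopedTimer("blah"))
{
    // do whatever
}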
It all has to do with the garbage collection mechanism. Chris Sells describes garbage collection, finalizers, and the reason for the Dispose pattern (and the IDisposable interface) in episode 10 of .NET Rocks! (starting about 34 minutes in).
Many objects manipulate other entities in ways that will cause problems if not cleaned up. These other entities may be almost anything, and they may be almost anywhere. As an example, a Socket object may ask another machine to open up a TCP connection. That other machine might not be capable of handling very many connections at once; indeed, it could be a web-equipped appliance that can only handle one connection at a time. If a program were to open a socket and simply forget about it, no other computer would be able to connect to the appliance unless or until the socket got closed (perhaps the appliance might close the socket itself after a few minutes of inactivity, but it would be useless until then).
If an object implements IDisposable, that means it has the knowledge and impetus required to perform necessary cleanup actions, and such actions need to be performed before such knowledge and impetus is lost. Calling IDisposable.Dispose will ensure that all such cleanup actions get carried out, whereupon the object may be safely abandoned.
Microsoft allows for objects to request protection from abandonment by registering a method called Finalize. If an object does so, the Finalize method will be called if the system detects that the object has been abandoned. Neither the object, nor any objects to which it holds direct or indirect references, will be erased from memory until the Finalize method has been given a chance to run. This provides something of a "backstop" in case an object is abandoned without being first Disposed. There are many traps, however, with objects that implement Finalize, since there's no guarantee as to when it will be called. Not only might an object be abandoned a long time before Finalize gets called, but if one isn't careful the system may actually call Finalize on an object while part of it is still in use. Dangerous stuff. It's far better to use Dispose properly.
Related
There are lots of examples of Arrays or Lists of IDisposable objects being returned from functions in .NET. For example, Process.GetProcesses().
If I call that method is it my responsibility to Dispose() of all the members of the array as I iterate through them?
Why should it be my responsibility since I never created the objects and the array that I was given is just pointers to the objects which were created outside of my code.
I always thought it was the creator's burden to Dispose().
So what is the proper rule here?
There is no general rule. It's going to depend on the situation, and how the method in question is designed, as to whether or not "you" are responsible for disposing of objects you have access to. This is where documentation is often important to help users of the type understand their responsibilities.
I always thought it was the creator's burden to Dispose()
This cannot be strictly true. It is sometimes the case that a disposable object will outlive the block of code that created it. While it is simplest when the creator can dispose of the object, sometimes that's simply impossible. Returning a disposable object from a method is one situation where the code creating the object often cannot clean it up, since the creating method's lifetime is shorter than that of the disposable object.
With relatively few exceptions (most of which could be described as least-of-evils approaches to dealing with poorly-designed code that can't be changed), every IDisposable instance should at any given moment in time have exactly one well-defined owner. In cases where a method returns something of a type that implements IDisposable, the contract for the method will specify whether the method is relinquishing ownership (in which case the caller should ensure that the object gets disposed--either by disposing of the object itself or relinquishing ownership to someone else), or whether the method is merely returning a reference to an object which is owned by someone else.
In properly-written code, the question of whether or not an object should be disposed is rarely a judgment call. The owner of an object should ensure that it gets disposed; nobody else should dispose it. Occasionally it may be necessary to have a method accept a parameter indicating whether the method should transfer ownership of an IDisposable. For example, if code wants to create a sound, pass it to a "start playing sound" method, and never deal with that sound again, it may be most convenient to have the sound-playing code take ownership of the sound and dispose it when it's done; if code wants to be able to play a sound repeatedly, however, and will ensure that the sound object stays alive as long as it's needed, it would be more convenient for the sound-playing code not to take ownership. Using separate methods may in some ways be cleaner, but using a parameter can aid encapsulation.
Generally, when code returns a list of objects that implement IDisposable, the purpose of the code is to identify objects without conveying any ownership interest in them. In the absence of an ownership interest, code receiving such a list should not call Dispose on it.
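A sketch of how the two sides of that ownership contract might look in code (all names here are hypothetical illustrations):

using System;
using System.Collections.Generic;
using System.IO;

// A factory that relinquishes ownership: the contract says the caller disposes.
static FileStream OpenLog(string path)
{
    return new FileStream(path, FileMode.OpenOrCreate);
}

// A lookup that conveys no ownership: callers must NOT dispose the results.
class StreamRegistry
{
    private readonly List<FileStream> _owned = new List<FileStream>();
    public IReadOnlyList<FileStream> GetOpenStreams() => _owned;  // still owned by the registry
}

// Caller side, taking ownership from the factory:
using (FileStream log = OpenLog("app.log"))
{
    // write to log; disposed when the block exits
}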
The GetProcesses method does not allocate any handles (or other non managed resources) within the Process instances returned.
Only if you call certain methods on the returned Process instances are handles created, and in almost all cases these are released before the method returns (e.g. Process.Kill).
Therefore it is completely unnecessary in most situations to dispose every Process instance returned.
The rule is very simple: if you think that other programs will use your IDisposables, then do not dispose them. Otherwise, do it.
For example: GetProcesses() returns processes potentially being used by other programs, so you shouldn't dispose them.
On the other hand, files you've opened should be released for other processes in the OS, so you should close and dispose the wrapper streams above them (say, you should dispose the stream returned by the File.Open method).
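The contrast between the two cases, as a minimal sketch:

using System;
using System.Diagnostics;
using System.IO;

// File.Open returns a FileStream holding an OS file handle;
// dispose it so other processes can access the file.
using (FileStream stream = File.Open("data.txt", FileMode.Open))
{
    // read from stream
}
// handle released here

// Process instances from GetProcesses(), by contrast, generally
// need no disposal in most situations.
foreach (Process p in Process.GetProcesses())
{
    Console.WriteLine(p.ProcessName);
}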
Update:
From MSDN:
DO implement the Basic Dispose Pattern on types containing instances of disposable types. See the Basic Dispose Pattern section for details on the basic pattern. If a type is responsible for the lifetime of other disposable objects, developers need a way to dispose of them, too. Using the container's Dispose method is a convenient way to make this possible.
DO implement the Basic Dispose Pattern and provide a finalizer on types holding resources that need to be freed explicitly and that do not have finalizers. For example, the pattern should be implemented on types storing unmanaged memory buffers. The Finalizable Types section discusses guidelines related to implementing finalizers.
CONSIDER implementing the Basic Dispose Pattern on classes that themselves don’t hold unmanaged resources or disposable objects but are likely to have subtypes that do.
I was working on serializing and deserializing a class object using XML when I came across this blog post that shows how to do it on Windows Phone 7 using the isolated storage area. Windows Phone 7 is the platform I am developing for.
In this example, the only object he explicitly calls Dispose() on is the TextReader object. I looked up the TextReader object on MSDN and found that the documentation said this:
Releases the unmanaged resources used by the TextReader and optionally releases the managed resources.
So I assume the reason he does this is to release immediately the unmanaged resources acquired by the TextReader object. It would not have occurred to me to do this if it weren't for his blog post. Obviously I don't want to start calling Dispose() on every object in sight, so what is a good rule of thumb for at least investigating when a particular object should have Dispose() called on it or not? Are there some guidelines for this or a list somewhere, at least of the popular .NET objects that require this special handling?
Obviously I don't want to start calling Dispose() on every object in sight
Wrong.
In general, any object that implements IDisposable should be disposed as soon as you're finished with it, typically using the using statement.
Most objects that do not have unmanaged resources do not implement IDisposable (and do not have Dispose() methods), so you have nothing to worry about.
The only exceptions are base classes that implement IDisposable in case some derived implementations have something to dispose (e.g., IEnumerator, Component, or TextReader).
However, it is not always obvious which concrete implementations need to be disposed (and it may change at any time), so you should always dispose them anyway.
Obviously I don't want to start calling Dispose() on every object in sight, so what is a good rule of thumb for at least investigating when a particular object should have Dispose() called on it or not?
This is not a problem. The compiler won't let you call Dispose() on an object that doesn't implement it.
And you should be calling Dispose() for every object that does implement it (which it will do via the IDisposable interface). That is the guideline you should be following. In fact, that's generally what it means when an object implements IDisposable: it holds resources (often unmanaged) that need to be released.
It becomes much less of a chore if you'll simply wrap the creation and use of the objects in a using statement, e.g.:
using (DisposableObject obj = new DisposableObject(...))
{
obj.DoWork();
} // obj.Dispose() is automatically called here, even if an exception is thrown
Actually you do have to dispose of objects which implement IDisposable.
The standard way of doing that as opposed to directly calling the Dispose() is:
using(AnyIDisposable obj = ...)
{
// work with obj here
}
//The Dispose() method is already called here
Please correct me if I'm wrong.
As far as I read/understood, all classes of the .NET Framework are managed (from the programmer's point of view, although underneath they might use unmanaged code), so theoretically you don't need to call Dispose() or use using, because the GC will take care of it. But sometimes it's strongly recommended to use them; see IDisposable Interface and
Which managed classes in .NET Framework allocate (or use) unmanaged memory? and http://blogs.msdn.com/b/kimhamil/archive/2008/11/05/when-to-call-dispose.aspx
EDIT: (you are right noob) For clarification I'll add Nayan's answer from IDisposable Interface
It is recommended to call Dispose, or use using, when:
1. Your class has many objects and there are lots of cross-references. Even though it's all managed, the GC may not be able to reclaim the memory due to live references. You get a chance (other than writing a finalizer) to untangle the references and break up the links the way you attached them, thereby helping the GC reclaim the memory.
2. You have some streams open which stay alive until the object of the class dies. Even though such file/network implementations are managed, they go deep down to handles in Win32 mode. Hence, you get a chance to write a Dispose method where you can close the streams. The same is true for GDI objects, and some more.
3. You are writing a class which uses unmanaged resources, and you want to ship your assembly to third parties. You had better use the dispose pattern to make sure you are able to free the handles and avoid leaks.
4. Your class implements lots of event handlers and hooks them up to events. The objects of the classes which expose the events, like Form etc., will not be freed up by the GC as long as handlers local to your class are still hooked into those events. You can unhook those event handlers in Dispose, again helping the GC.
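Point 4 can be sketched in code; the class, form and event names here are hypothetical illustrations:

using System;
using System.Windows.Forms;

class StatusWatcher : IDisposable
{
    private readonly Form _form;

    public StatusWatcher(Form form)
    {
        _form = form;
        _form.Resize += OnResize;  // the Form now holds a reference back to us
    }

    private void OnResize(object sender, EventArgs e) { /* ... */ }

    public void Dispose()
    {
        // Unhook so the Form no longer keeps this object alive.
        _form.Resize -= OnResize;
    }
}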
I've written a class which pairs up a TransactionScope with an Linq to Sql DataContext.
It implements the same methods as the TransactionScope, Dispose() and Complete() and exposes the DataContext.
Its purpose is to ensure that DataContexts are not re-used: they are paired with a single transaction and disposed along with it.
Should I include a Finalize method in the class? One that calls Dispose if it has not already been called? Or it that only for IDisposables that reference unmanaged resources?
No, never implement a finaliser in a class that is disposable just because it wraps a disposable class.
Consider that you have three clean-up scenarios for a class with Dispose and a finaliser:
Dispose() is called.
The finaliser is called on application shutdown.
The object was going to be collected, but the finaliser hadn't been suppressed (most often from a call to Dispose(), but note that you should always suppress your finaliser when anything puts it in a state where it doesn't need to be cleaned up, and re-registered if it is put in a state where it does need it - e.g. if you had an Open()/Close() pair of methods).
Now, if you are directly managing an unmanaged resource (e.g. a handle through an IntPtr), these three scenarios in which one of the two clean-up methods is called directly match the three scenarios where you need clean-up to happen.
Okay. So, let's consider a disposable wrapping a disposable where the "outer" class has a finaliser implemented correctly:
~MyClass()
{
// This space deliberately left blank.
}
The finaliser doesn't do anything, because there's no unmanaged clean-up for it to handle. The only effect is that if this null finaliser hasn't been suppressed, then upon garbage collection the object will be put in the finaliser queue - keeping it, and anything reachable only through its fields, alive and promoting them to the next generation - and eventually the finalisation thread will call this no-op method, mark the object as having been finalised, and it becomes eligible for garbage collection again. But since it was promoted, it'll be Gen 1 if it had been Gen 0, and Gen 2 if it had been Gen 1.
The actual object that did need to be finalised will also be promoted, and it'll have to wait that bit longer not just for collection, but also for finalisation. It's going to end up in Gen 2 no matter what.
Okay, that's bad enough, let's say we actually put some code in the finaliser that did something with the field that holds the finalisable class.
Wait. What are we going to do? We can't call a finaliser directly, so we dispose it. Oh wait, are we sure that for this class the behaviour of Dispose() and that of the finaliser is close enough that it's safe? How do we know it doesn't hold onto some resources via weak-references that it will try to deal with in the dispose method and not in the finaliser? The author of that class knows all about the perils of dealing with weak-references from within a finaliser, but they thought they were writing a Dispose() method, not part of someone else's finaliser method.
And then, what if the outer finaliser was called during application shut-down? How do you know the inner finaliser wasn't already called? When the author of the class was writing their Dispose() method, were they sure to ponder "okay, now let's make sure I handle the case of this being called after the finaliser has already run and the only thing left for this object to do is have its memory freed"? Not really. It might be that they guarded against repeated calls to Dispose() in such a way that also protects against this scenario, but you can't really bet on it (especially since they won't be helping with that in the finaliser, which they know will be the last method ever called, making any other sort of clean-up, like nulling fields that won't be used again to flag them as such, pointless). It could end up doing something like dropping the reference count of some reference-counted resource, or otherwise violating the contract of the unmanaged code it is its job to deal with.
So: the best-case scenario with a finaliser in such a class is that you damage the efficiency of garbage collection, and the worst case is that you have a bug which interferes with the perfectly good clean-up code you were trying to help.
Note also the logic behind the pattern MS used to promote (and still have in some of their classes) where you have a protected Dispose(bool disposing) method. Now, I have a lot of bad things to say about this pattern, but when you look at it, it is designed to deal with the very fact that what you clean up with Dispose() and what you clean up in a finaliser are not the same - the pattern means that an object's directly-held unmanaged resources will be cleaned up in both cases (in the scenario of your question, there are no such resources) and that managed resources such as an internally-held IDisposable object are cleaned-up only from Dispose(), and not from a finaliser.
Implement IDisposable.Dispose() if you have anything that needs to be cleaned up, whether an unmanaged resource, an object that is IDisposable or anything else.
Write a finaliser if, and only if you directly have an unmanaged resource that needs to be cleaned up, and make cleaning it up the only thing you do there.
For bonus points, avoid being in both classes at once - wrap all unmanaged resources in disposable and finalisable classes that only deal with that unmanaged classes, and if you need to combine that functionality with other resources do it in a disposable-only class that uses such classes. That way clean-up will be clearer, simpler, less prone to bugs, and less damaging to GC efficiency (no risk of finalisation of one object delaying that of another).
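That split could be sketched as follows; the class names are hypothetical, and the unmanaged resource here is a raw memory buffer via Marshal:

using System;
using System.IO;
using System.Runtime.InteropServices;

// Finalisable class: owns exactly one unmanaged resource, nothing else.
sealed class NativeBufferHandle : IDisposable
{
    private IntPtr _ptr;

    public NativeBufferHandle(int size) { _ptr = Marshal.AllocHGlobal(size); }

    public void Dispose()
    {
        Free();
        GC.SuppressFinalize(this);  // already clean; skip finalisation
    }

    ~NativeBufferHandle() { Free(); }  // the only thing the finaliser does

    private void Free()
    {
        if (_ptr != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(_ptr);
            _ptr = IntPtr.Zero;
        }
    }
}

// Disposable-only class: combines resources, no finaliser at all.
sealed class Document : IDisposable
{
    private readonly NativeBufferHandle _buffer = new NativeBufferHandle(4096);
    private readonly FileStream _file = File.OpenRead("doc.bin");

    public void Dispose()
    {
        _buffer.Dispose();
        _file.Dispose();
    }
}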
A finalizer is meant solely for cleaning up unmanaged resources. There is no point in calling Dispose on dependent objects inside the finalizer, since if those objects manage critical resources, they have finalizers themselves.
In .NET 2.0 and up there is even less reason to implement a finalizer, now that .NET contains the SafeHandle class.
However, one reason I sometimes find to still implement a finalizer, is to find out whether developers forgot to call Dispose. I let this class implement the finalizer only in the debug build and let it write to the Debug window.
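That diagnostic finalizer might be sketched like this (TransactionContext is a hypothetical class name):

using System;
using System.Diagnostics;

sealed class TransactionContext : IDisposable
{
    public void Dispose()
    {
        // ... normal clean-up ...
        GC.SuppressFinalize(this);  // disposed properly, so no finalizer call
    }

#if DEBUG
    ~TransactionContext()
    {
        // Only reached if Dispose() was never called; debug builds only.
        Debug.Fail("TransactionContext was not disposed.");
    }
#endif
}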
There is no simple answer for this one - it is debatable.
What?
The debate is whether to use a finaliser with the full Disposable pattern. Your code uses transactions and a database context - those (usually) use unmanaged resources (like kernel transaction objects and TCP/IP connections).
Why?
If you use any unmanaged resource that should be cleaned up, you should implement IDisposable. Then client code can wrap use of the class in the recommended using (IDisposable myClass = new MyClass()) {...} construct. The problem is that if a developer doesn't call IDisposable.Dispose() explicitly or implicitly, the resource won't be freed automatically, even after the object myClass has been collected by the GC. This is because the GC never calls Dispose during collection; that is the responsibility of the finalisation queue.
Thus you can define a finaliser, which will eventually be called by the GC's finalisation thread, independently of garbage collection.
Opinions
Some people argue that you should just make sure you put all the disposable code into using () {} and forget about finalisation. After all, you must release such resources ASAP, and the whole finalisation process is kinda vague for many developers.
In contrast, I prefer to explicitly implement a finaliser, simply because I don't know who will use my code. So if someone forgets to call Dispose on a class that requires it, the resource will eventually be released.
Conclusion
Personally, I would recommend implementing a finaliser with any class that implements IDisposable.
I was implementing Finalize and Dispose in my classes, I implemented IDisposable on my parent class and override the Dispose(bool) overload in my child classes. I was not sure
whether to use a duplicate isDisposed variable (as it's already there in the base class) or not?
Whether to implement a finalizer in child class too or not?
Both these things are done in example given here -
http://guides.brucejmack.biz/CodeRules/FxCop/Docs/Rules/Usage/DisposeMethodsShouldCallBaseClassDispose.html
Whereas example in this MSDN article doesn't have any of these two -
http://msdn.microsoft.com/en-us/library/b1yfkh5e.aspx
whereas this example in MSDN is not complete -
http://msdn.microsoft.com/en-us/library/ms182330.aspx
It's very rare for a finalizer to be useful. The documentation you link to isn't totally helpful - it offers the following rather circular advice:
Implement Finalize only on objects that require finalization
That's an excellent example of begging the question, but it's not very helpful.
In practice, the vast majority of the time you don't want a finalizer. (One of the learning curves .NET developers have to go through is discovering that in most of the places they think they need a finalizer, they don't.) You've tagged this as (amongst other things) a WPF question, and I'd say it'd almost always be a mistake to put a finalizer on a UI object. (So even if you are in one of the unusual situations that turns out to require a finalizer, that work doesn't belong anywhere near code that concerns itself with WPF.)
For most of the scenarios in which finalizers seem like they might be useful, they turn out not to be, because by the time your finalizer runs, it's already too late for it to do anything useful.
For example, it's usually a bad idea to try to do anything with any of the objects your object has a reference to, because by the time your finalizer runs, those objects may already have been finalized. (.NET makes no guarantees about the order in which finalizers run, so you simply have no way of knowing whether the objects you've got references to have been finalized.) It's a bad idea to invoke a method on an object whose finalizer has already been run.
If you have some way of knowing that some object definitely hasn't been finalized, then it is safe to use it, but that's a pretty unusual situation to be in. (...unless the object in question has no finalizer, and makes use of no finalizable resources itself. But in that case, it's probably not an object you'd actually need to do anything to when your own object is going away.)
The main situation in which finalizers seem useful is interop: e.g., suppose you're using P/Invoke to call some unmanaged API, and that API returns you a handle. Perhaps there's some other API you need to call to close that handle. Since that's all unmanaged stuff, the .NET GC doesn't know what those handles are, and it's your job to make sure that they get cleaned up, at which point a finalizer is reasonable...except in practice, it's almost always best to use a SafeHandle for that scenario.
In practice, the only places I've found myself using finalizers have been a) experiments designed to investigate what the GC does, and b) diagnostic code designed to discover something about how particular objects are being used in a system. Neither kind of code should end up going into production.
So the answer to whether you need "to implement a finalizer in child class too or not" is: if you need to ask, then the answer is no.
As for whether to duplicate the flag...other answers are providing contradictory advice here. The main points are 1) you do need to call the base Dispose and 2) your Dispose needs to be idempotent. (I.e., it doesn't matter if it's called once, twice, 5 times, 100 times - it shouldn't complain if it's called more than once.) You're at liberty to implement that however you like - a boolean flag is one way, but I've often found that it's enough to set certain fields to null in my Dispose method, at which point that removes any need for a separate boolean flag - you can tell that Dispose was already called because you already set those fields to null.
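The nulled-field approach might look like this (ReportWriter is a hypothetical class):

using System.IO;

class ReportWriter : System.IDisposable
{
    private StreamWriter _writer = new StreamWriter("report.txt");

    public void Dispose()
    {
        // Idempotent: safe to call any number of times.
        if (_writer != null)
        {
            _writer.Dispose();
            _writer = null;  // doubles as the "already disposed" flag
        }
    }
}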
A lot of the guidance out there on IDisposable is extremely unhelpful, because it addresses the situation where you need a finalizer, but that's actually a very unusual case. It means that lots of people write IDisposable implementations that are far more complex than necessary. In practice, most classes fall into the category Stephen Cleary calls "level 1" in the article that jpierson linked to. And for these, you don't need all the GC.KeepAlive, GC.SuppressFinalize, and Dispose(bool) stuff that clutters up most of the examples. Life's actually much simpler most of the time, as Cleary's advice for these "level 1" types shows.
Duplicate is needed
If you don't have any clean-up in the child class, simply call base.Dispose(); and if there is some class-level clean-up, do it before calling base.Dispose(). You need to separate the state of these two classes, so there should be an IsDisposed boolean for each class. This way you can add clean-up code at whichever level needs it.
When you mark a class as IDisposable, you are simply telling the GC "I'm taking care of its clean-up procedure", and you should call GC.SuppressFinalize so the GC removes it from its finalization queue. Unless you call GC.SuppressFinalize(this), nothing special happens to an IDisposable class. So if you implement it as I mentioned, there's no need for a finalizer, since you've just told the GC not to finalize it.
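A sketch of that layout, with one flag per level of the hierarchy (BaseResource and DerivedResource are hypothetical names):

using System;

class BaseResource : IDisposable
{
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing) { /* base-level clean-up */ }
        _disposed = true;
    }
}

class DerivedResource : BaseResource
{
    private bool _disposed;  // separate flag for this level's state

    protected override void Dispose(bool disposing)
    {
        if (!_disposed)
        {
            if (disposing) { /* derived-level clean-up */ }
            _disposed = true;
        }
        base.Dispose(disposing);  // always let the base clean up too
    }
}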
The correct way to implement IDisposable depends on whether you have any unmanaged resources owned by your class. The exact way to implement IDisposable is still something not all developers agree on and some like Stephen Cleary have strong opinions on the disposable paradigm in general.
see: Implementing Finalize and Dispose to Clean Up Unmanaged Resources
The documentation for the IDisposable interface also explains this briefly, and this article on MSDN points out some of the same things.
As far as whether a duplicate boolean field "isDisposed" is required in the base class: it appears that this is mainly a useful convention for when a subclass itself adds additional unmanaged resources that need to be disposed of. Since Dispose is declared virtual, calling Dispose on a subclass instance always causes that class's Dispose method to be called first, which in turn calls base.Dispose as its last step, giving a chance to clean up each level in the inheritance hierarchy. So I would summarize this as: if your subclass has additional unmanaged resources beyond what is owned by the base, then you will probably be best off having your own boolean isDisposed field to track its disposal in a transactional manner inside its Dispose method; but as Ian mentions in his answer, there are other ways to represent an already-disposed state.
1) No need to duplicate
2) Implementing a finalizer will help to dispose items that were not explicitly disposed, but its execution is not guaranteed. It is a good practice to do.
Only implement a finalizer if an object holds information about stuff needing cleanup, and this information is in some form other than Object references to other objects needing cleanup (e.g. a file handle stored as an Int32). If a class implements a finalizer, it should not hold strong Object references to any other objects which are not required for cleanup. If it would hold other references, the portion responsible for cleanup should be split off into its own object with a finalizer, and the main object should hold a reference to that. The main object should then not have a finalizer.
Derived classes should only have finalizers if the purpose of the base class was to support one. If the purpose of a class doesn't center around a finalizer, there's not much point allowing a derived class to add one, since derived classes almost certainly shouldn't (even if they need to add unmanaged resources, they should put the resources in their own class and just hold a reference to it).
When would I implement IDisposable on a class as opposed to a destructor? I read this article, but I'm still missing the point.
My assumption is that if I implement IDisposable on an object, I can explicitly 'destruct' it as opposed to waiting for the garbage collector to do it. Is this correct?
Does that mean I should always explicitly call Dispose on an object? What are some common examples of this?
A finalizer (aka destructor) is part of garbage collection (GC) - it is indeterminate when (or even if) this happens, as GC mainly happens as a result of memory pressure (i.e. need more space). Finalizers are usually only used for cleaning up unmanaged resources, since managed resources will have their own collection/disposal.
Hence IDisposable is used to deterministically clean up objects, i.e. now. It doesn't collect the object's memory (that still belongs to GC) - but is used for example to close files, database connections, etc.
There are lots of previous topics on this:
deterministic finalization
disposing objects
using block
resources
Finally, note that it is not uncommon for an IDisposable object to also have a finalizer; in this case, Dispose() usually calls GC.SuppressFinalize(this), meaning that GC doesn't run the finalizer - it simply throws the memory away (much cheaper). The finalizer still runs if you forget to Dispose() the object.
The role of the Finalize() method is to ensure that a .NET object can clean up unmanaged resources when garbage collected. However, objects such as database connections or file handles should be released as soon as possible, instead of relying on garbage collection. For that you should implement the IDisposable interface, and release your resources in the Dispose() method.
The only thing that should be in a C# destructor is this line:
Dispose(false);
That's it. Nothing else should ever be in that method.
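In the classic Dispose(bool) pattern, that rule looks like this (ResourceHolder is a hypothetical class name):

using System;

class ResourceHolder : IDisposable
{
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);  // the finalizer is no longer needed
    }

    ~ResourceHolder()
    {
        Dispose(false);  // the only line a destructor should contain
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // release managed resources (other IDisposables) here
        }
        // release unmanaged resources here, in both cases
    }
}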
There is a very good description on MSDN:
The primary use of this interface is to release unmanaged resources.
The garbage collector automatically releases the memory allocated to a managed object when that object is no longer used. However, it is not possible to predict when garbage collection will occur. Furthermore, the garbage collector has no knowledge of unmanaged resources such as window handles, or open files and streams.
Use the Dispose method of this interface to explicitly release unmanaged resources in conjunction with the garbage collector. The consumer of an object can call this method when the object is no longer needed.
Your question regarding whether or not you should always call Dispose is usually a heated debate. See this blog for an interesting perspective from respected individuals in the .NET community.
Personally, I think Jeffrey Richter's position that calling Dispose is not mandatory is incredibly weak. He gives two examples to justify his opinion.
In the first example he says calling Dispose on Windows Forms controls is tedious and unnecessary in mainstream scenarios. However, he fails to mention that Dispose actually is called automatically by control containers in those mainstream scenarios.
In the second example he states that a developer may incorrectly assume that the instance from IAsyncResult.WaitHandle should be aggressively disposed without realizing that the property lazily initializes the wait handle resulting in an unnecessary performance penalty. But, the problem with this example is that the IAsyncResult itself does not adhere to Microsoft's own published guidelines for dealing with IDisposable objects. That is if a class holds a reference to an IDisposable type then the class itself should implement IDisposable. If IAsyncResult followed that rule then its own Dispose method could make the decision regarding which of its constituent members needs disposing.
So unless someone has a more compelling argument I am going to stay in the "always call Dispose" camp with the understanding that there are going to be some fringe cases that arise mostly out of poor design choices.
It's pretty simple really. I know it's been answered but I'll try again but will try to keep it as simple as possible.
A destructor should generally never be used. It is only run when .NET wants it to run, and only after a garbage collection cycle. It may never actually run during the lifecycle of your application. For this reason, you should never put any code in a destructor that 'must' be run. You also can't rely on any other objects within the class still existing when it runs (they may have already been cleaned up, since the order in which destructors run is not guaranteed).
IDisposable should be used whenever you have an object that creates resources that need cleaning up (e.g. file and graphics handles). In fact, many argue that anything you would put in a destructor should be put in Dispose instead, for the reasons listed above.
Most classes will call Dispose when the finalizer executes, but this is simply there as a safeguard and should never be relied upon. You should explicitly dispose anything that implements IDisposable when you're done with it. If you do implement IDisposable, you should call Dispose in the finalizer. See http://msdn.microsoft.com/en-us/library/system.idisposable.aspx for an example.
Here is another fine article which clears up some of the mist surrounding IDisposable, the GC and dispose.
Chris Lyons WebLog Demystifying Dispose