I've got a C# class with a Dispose function via IDisposable. It's intended to be used inside a using block so the expensive resource it handles can be released right away.
The problem is that a bug occurred when an exception was thrown before Dispose was called, and the programmer neglected to use using or finally.
In C++, I never had to worry about this. The call to a class's destructor would be automatically inserted at the end of the object's scope. The only way to avoid that would be to use the new operator and hold the object behind a pointer, but that required extra work from the programmer and isn't something they would do by accident, like forgetting to use using.
Is there any way for a using block to be automatically used in C#?
Many thanks.
UPDATE:
I'd like to explain why I'm not accepting the finalizer answers. Those answers are technically correct in themselves, but they are not C++ style destructors.
Here's the bug I found, reduced to the essentials...
try
{
    PleaseDisposeMe a = new PleaseDisposeMe();
    throw new Exception();
    a.Dispose(); // never reached because of the exception above
}
catch (Exception ex)
{
    Log(ex);
}

// This next call will throw a time-out exception unless the GC
// runs a.Dispose in time.
PleaseDisposeMe b = new PleaseDisposeMe();
Using FxCop is an excellent suggestion, but if that's my only answer, my question would have to become a plea to the C# people, or a reason to use C++. Twenty nested using statements, anyone?
Where I work we use the following guidelines:
Each IDisposable class must have a finalizer
Whenever using an IDisposable object, it must be used inside a "using" block. The only exception is if the object is a member of another class, in which case the containing class must be IDisposable and must call the member's 'Dispose' method in its own implementation of 'Dispose'. This means 'Dispose' should never be called by the developer except for inside another 'Dispose' method, eliminating the bug described in the question.
The code in each finalizer must begin with a warning/error log notifying us that the finalizer has been called. This way you have an extremely good chance of spotting such bugs as described above before releasing the code, plus it might be a hint for bugs occurring in your system.
To make our lives easier, we also have a SafeDispose method in our infrastructure, which calls the Dispose method of its argument within a try-catch block (with error logging), just in case (although Dispose methods are not supposed to throw exceptions). A minimal sketch of these guidelines follows below.
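The following is only an illustrative sketch of those guidelines; Infrastructure, Container and the MemoryStream member are made-up names, not our actual code:

using System;

public static class Infrastructure
{
    // Hypothetical helper matching the description above: it logs instead of
    // letting a (badly behaved) Dispose exception escape.
    public static void SafeDispose(IDisposable resource)
    {
        if (resource == null) return;
        try
        {
            resource.Dispose();
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine("Dispose threw: " + ex);
        }
    }
}

// The containing class owns an IDisposable member, so it is itself IDisposable
// and disposes the member from its own Dispose (second guideline above).
public class Container : IDisposable
{
    private readonly IDisposable _member = new System.IO.MemoryStream();

    public void Dispose()
    {
        Infrastructure.SafeDispose(_member);
        GC.SuppressFinalize(this);
    }

    ~Container()
    {
        // Third guideline: reaching the finalizer means somebody forgot to dispose.
        Console.Error.WriteLine("WARNING: Container finalized without being disposed");
    }
}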
See also: Chris Lyon's suggestions regarding IDisposable
Edit:
#Quarrelsome: One thing you ought to do is call GC.SuppressFinalize inside 'Dispose', so that if the object was disposed, it wouldn't be "re-disposed".
It is also usually advisable to hold a flag indicating whether the object has already been disposed or not. The following pattern is usually pretty good:
class MyDisposable : IDisposable {
    public void Dispose() {
        lock (this) {
            if (disposed) {
                return;
            }
            disposed = true;
        }
        GC.SuppressFinalize(this);
        // Do actual disposing here ...
    }

    private bool disposed = false;
}
Of course, locking is not always necessary, but if you're not sure if your class would be used in a multi-threaded environment or not, it is advisable to keep it.
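If you'd rather not take a lock at all (particularly not on this), an Interlocked flag gives the same run-once guarantee. This is just a sketch of that alternative, not the pattern above:

using System;
using System.Threading;

class MyDisposable : IDisposable
{
    // 0 = not disposed yet, 1 = disposed
    private int disposed;

    public void Dispose()
    {
        // Only the first caller sees the old value 0 and performs the cleanup.
        if (Interlocked.Exchange(ref disposed, 1) != 0)
        {
            return;
        }

        GC.SuppressFinalize(this);
        // Do actual disposing here ...
    }
}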
Unfortunately there isn't any way to do this directly in the code. If this is an in-house issue, there are various code analysis solutions that could catch these sorts of problems. Have you looked into FxCop? I think it will catch these situations, as well as other cases where IDisposable objects might be left hanging. If it is a component that people are using outside of your organization and you can't require FxCop, then documentation is really your only recourse :).
Edit: In the case of finalizers, this doesn't really guarantee when the finalization will happen. So this may be a solution for you but it depends on the situation.
#Quarrelsome
It will get called when the object is moved out of scope and is tidied by the garbage collector.
This statement is misleading and, as I read it, incorrect: there is absolutely no guarantee when the finalizer will be called. You are absolutely correct that billpg should implement a finalizer; however it will not be called automatically when the object goes out of scope like he wants. Evidence: the first bullet point under "Finalize operations have the following limitations".
In fact Microsoft gave a grant to Chris Sells to create an implementation of .NET that used reference counting instead of garbage collection (Link). As it turned out, there was a considerable performance hit.
~ClassName()
{
}
EDIT (bold):
It will get called when the object is moved out of scope and is tidied by the garbage collector, however this is not deterministic and is not guaranteed to happen at any particular time.
This is called a Finalizer. All objects with a finaliser get put on a special finalise queue by the garbage collector where the finalise method is invoked on them (so it's technically a performance hit to declare empty finalisers).
The "accepted" dispose pattern as per the Framework Guidelines is as follows with unmanaged resources:
public class DisposableFinalisableClass : IDisposable
{
    ~DisposableFinalisableClass()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // part of the standard pattern: no need to finalise once disposed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // tidy managed resources
        }

        // tidy unmanaged resources
    }
}
So the above means that if someone calls Dispose the unmanaged resources are tidied. However in the case of someone forgetting to call Dispose or an exception preventing Dispose from being called the unmanaged resources will still be tidied away, only slightly later on when the GC gets its grubby mitts on it (which includes the application closing down or unexpectedly ending).
The best practice is to use a finaliser in your class and always use using blocks.
There isn't really a direct equivalent though; finalisers look like C++ destructors, but behave differently.
You're supposed to stack using blocks; that's why the C# code layout defaults to keeping them at the same indentation level...
using (SqlConnection con = new SqlConnection("DB con str"))
using (SqlCommand com = new SqlCommand("sql query", con))
{
    //now code is indented one level
    //technically we're nested twice
}
When you're not using using you can just do what it does under the hood anyway:
PleaseDisposeMe a = null;
try
{
    a = new PleaseDisposeMe();
    throw new Exception();
}
catch (Exception ex) { Log(ex); }
finally
{
    //this always executes, even with the exception
    if (a != null)
    {
        a.Dispose();
    }
}
With managed code C# is very very good at looking after its own memory, even when stuff is poorly disposed. If you're dealing with unmanaged resources a lot it's not so strong.
This is no different from a programmer forgetting to use delete in C++, except that at least here the garbage collector will still eventually catch up with it.
And you never need to use IDisposable if the only resource you're worried about is memory. The framework will handle that on its own. IDisposable is only for unmanaged resources like database connections, file streams, sockets, and the like.
A better design is to make this class release the expensive resource on its own, before it's disposed.
For example, if it's a database connection, only connect when needed and release the connection immediately, long before the object itself gets disposed, as in the sketch below.
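A minimal sketch of that idea, with invented names (CustomerRepository, the connection string and the query are all placeholders): the connection exists only for the duration of each call, so nothing expensive is left for Dispose or a finalizer to clean up later.

using System.Data.SqlClient;

public class CustomerRepository
{
    private readonly string _connectionString;

    public CustomerRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public int CountCustomers()
    {
        // Acquire late, release early: the expensive resource lives only
        // inside this method, not for the lifetime of the repository object.
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Customers", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}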
Related
Until recently it never really bothered me how best to declare and dispose of a local variable, but I thought I'd ask once and for all and get some feedback, as it's starting to bug me more and more these days.
When writing a function/method that creates a local object, which approach is best for creating and disposing of that object?
For simplicity's sake, assume that the method being called on the object, i.e. ConvertThisToString, will never generate an exception.
private string myFirstFunction()
{
    MyDataType myObject = null;
    try
    {
        myObject = new MyDataType();
        return myObject.ConvertThisToString();
    }
    finally
    {
        myObject = null;
    }
}
or
private string mySecondFunction()
{
    MyDataType myObject = new MyDataType();
    return myObject.ConvertThisToString();
}
Are both functions ok and is it just about coding preferences or is there one method that's better than the other? Why?
My opinion is that one always requires the try/finally in order to nullify the object, which might be overkill just for nullifying's sake, while the other method doesn't do anything explicit to destroy the object, which might be too reliant on the .NET GC to release it from memory.
Should I be using the "using" statement instead?
Well, this may be based on an incorrect assumption: is a local object immediately destroyed and disposed of when leaving a function, or will it be cleared at a later stage by the GC?
Thanks for feedback.
Thierry
UPDATED:
Removed the catch block as it caused confusion in my question. It shouldn't have been there in the first place since I did say no error would ever occur.
That's very wrong.
Don't swallow exceptions.
Assigning a variable to null at the end of its scope will not help the GC at all.
If your object actually has expensive resources, it should implement IDisposable (correctly!), and you should dispose it using a using statement (but only when you're finished with it!)
You don't need to assign to null. When the object leaves scope, it will automatically be eligible for GC. There is no need to do anything special.
Basically, just write the second, simple version.
Should I be using the "using" statement instead?
If your object is wrapping resources (not memory allocated via new ..., but native resources) and implements IDisposable, then yes, you should use the using statement to guarantee those are cleaned up.
Is a local object immediately destroyed and disposed of when leaving a function or will it be cleared at a later stage by the GC management or other.
It will become eligible to be collected. At some point in the future, the GC will clean it up, but the time when this happens is indeterminate.
The best approach here is the using statement.
something like this:
private string myFirstFunction()
{
    using (MyDataType myObject = new MyDataType())
    {
        return myObject.ConvertThisToString();
    }
}
this will dispose the object after execution.
As with almost everything, it all depends on what object you are using. If the object you are creating implements IDisposable then you would typically be best served to place it in a using block. Outside of that, most objects will get cleaned up by the garbage collector. If you are the producer of a class that accesses COM objects then, as the producer, you should have provided a way for proper cleanup, e.g. implement the IDisposable interface and handle Dispose() correctly. As others have commented, swallowing exceptions or even try/catching EVERY method doesn't seem like a reasonable or good idea. If a call you are making has the potential of throwing an exception and you have unmanaged or leaky objects then you should handle it via a try/finally or a using (again, if appropriate).
You don't need to use a try/catch/finally to set a variable to null. The .NET GC will clean up any unreferenced objects in its own time after the method ends.
If you are doing something really intensive (more so than your sample shows) then you can set the variable reference to null as a hint to the GC that it is no longer referenced. This will only make a difference to your program if you are doing something which ends up marking the variable reference for Gen2 collection and you then do more work which prevents the GC from collecting your variable (because the scope has not been left, in this case the method).
This is a bit of an extreme case, as the .NET GC is designed to remove this aspect of programming from your daily concern, and regardless, it should get cleared up when the scope ends; it might just take a bit longer.
You use 'using' if the referenced object implements IDisposable, and you do this typically to release unmanaged resources - though there may be other reasons why a class implements this interface.
Your first method is total overkill... it should just be as described in mySecondFunction(), and just don't swallow exceptions (I'm referring to the empty catch block) like that, because it leads to buggy, unmaintainable code! :)
Using C#.NET 4.0
My company's application makes use of a resource locker to keep records from being edited simultaneously. We use the database to store the start time of a lock as well as the user who acquired the lock. This has led to the following (strange?) implementation of dispose on the resource locker, which happens to be called from a destructor:
protected virtual void Dispose(bool disposing)
{
    lock (this)
    {
        if (lockid.HasValue)
        {
            this.RefreshDataButtonAction = null;
            this.ReadOnlyButtonAction = null;
            try
            {
                // --- the database call the question refers to ---
                Dictionary<string, object> parameters = new Dictionary<string, object>();
                parameters.Add("#lockID", lockid.Value);
                parameters.Add("#readsToDelete", null);
                Object returnObject = dbio2.ExecuteScalar("usp_DeleteResourceLockReads", parameters);
                // -------------------------------------------------
                lockid = null;
            }
            catch (Exception ex)
            {
                Logger.WriteError("ResourceLockingController", "DeleteResourceLocks", ex);
            }
            finally
            {
                ((IDisposable)_staleResourcesForm).Dispose();
                _staleResourcesForm = null;
            }
        }
    }
}
I am concerned about the marked section (the database call) because we have been logging strange "Handle is not initialized" exceptions from it. I read elsewhere that it is not safe to create new objects during Finalize(), but does the same rule apply to Dispose()? Are there any possible side effects that accompany creating new objects during Dispose()?
which happens to be called from a destructor
That's the real problem. You cannot assume that the dbio2 object hasn't been finalized itself. Finalization order is not deterministic in .NET. The outcome would look much like you describe: an internal handle used by the database provider will have been released, so a "Handle is not initialized" exception is expected. Or the dbio2 object was simply already disposed.
This is especially likely to go wrong at program exit. You'll then also run into the 2 second timeout for the finalizer thread; a database operation can easily take more.
You simply cannot rely on a finalizer to do this for you. You must check the disposing argument and not call the dbio2.ExecuteScalar() method when it is false. Which probably ends the usefulness of the destructor as well.
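A minimal sketch of that guard, reusing the shape of the code in the question (ResourceLocker, ReleaseLockInDatabase and the fields are assumptions standing in for the real members):

using System;

public class ResourceLocker : IDisposable
{
    private int? lockid;                      // set when the lock is acquired elsewhere
    private IDisposable _staleResourcesForm;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    ~ResourceLocker()
    {
        Dispose(false);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!disposing)
        {
            // Finalizer thread: other managed objects (the dbio2 wrapper, the form)
            // may already be finalized, so do not touch them here.
            return;
        }

        if (lockid.HasValue)
        {
            ReleaseLockInDatabase(lockid.Value); // the dbio2.ExecuteScalar call goes here
            lockid = null;
        }

        if (_staleResourcesForm != null)
        {
            _staleResourcesForm.Dispose();
            _staleResourcesForm = null;
        }
    }

    // Placeholder for the stored procedure call from the question.
    private void ReleaseLockInDatabase(int id) { }
}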
Dispose is just a method, like any other method. There are some conventions about things that it should/shouldn't do, but there's nothing from the system's perspective that is wrong with creating objects in a Dispose call.
Making a DB call is a bit concerning to me personally; I wouldn't expect such an expensive and error-prone activity to happen in a Dispose method, but that's more of a convention/expectation. The system won't have a problem with it.
Yes, but I would not do so unless the object created is within the local scope of the method. IDisposable is an advertisement that this class has some resource (often an unmanaged resource) that should be freed when the object is no longer used. If your Dispose is being called by your finalizer (i.e. you are not calling it directly, but waiting for the GC to do it), it can be an indication that you should be calling it earlier. You never know when the C# destructor will run, so you may be unnecessarily tying up that resource. It could also be an indication that your class doesn't need to implement IDisposable.
In your case you are using the dbio2 object, which I assume represents your DB connection. However, since this is called from the destructor, how do you know your connection is still valid? Your destructor could run an hour after your connection has been lost. You should try to ensure that this Dispose is called while you know the dbio2 object is still in scope.
Regarding the Microsoft built classes that inherit IDisposable, do I explicitly have to call Dispose to prevent memory leaks?
I understand that it is best practice to call Dispose (or better yet use a using block), however when programming, typically I don't always immediately realise that a class inherits from IDisposable.
I also understand that Microsoft's implementation of IDisposable is a bit borked, which is why they created the article explaining the correct usage of IDisposable.
Long story short, in which instances is it okay to forget to call Dispose?
There are a couple of issues in the primary question
Do I explicitly have to call Dispose to prevent memory leaks?
Calling Dispose on any type which implements IDisposable is highly recommended and may even be a fundamental part of the type's contract. There is almost no good reason not to call Dispose when you are done with the object. An IDisposable object is meant to be disposed.
But will failing to call Dispose create a memory leak? Possibly. It's very dependent on what exactly that object does in its Dispose method. Many free memory, some unhook from events, others free handles, etc. It may not leak memory, but it will almost certainly have a negative effect on your program.
In which instances is it okay to forget to call Dispose?
I'd start with none. The vast majority of objects out there implement IDisposable for good reason. Failing to call Dispose will hurt your program.
It depends on two things:
What happens in the Dispose method
Does the finalizer call Dispose
Dispose functionality
Dispose can do several types of actions, like closing a handle to a resource (such as a file stream), changing the class state, and releasing other components the class itself uses.
In the case of a resource being released (like a file), there's a functional difference between calling it explicitly and waiting for it to be called during garbage collection (assuming the finalizer calls Dispose).
In case there's no state change and only components are released, there'll be no memory leak since the object will be freed by the GC later.
Finalizer
In most cases, disposable types call the Dispose method from the finalizer. If this is the case, and assuming the context in which Dispose is called doesn't matter, then there's a high chance you'll notice no difference if the object is not disposed explicitly. But if Dispose is not called from the finalizer, then your code will behave differently.
Bottom line - in most cases, it's better to dispose the object explicitly when you're done with it.
A simple example of where it's better to call Dispose explicitly: assuming you're using a FileStream to write some content with no sharing enabled, the file stays locked by the process until the GC gets to the object. The stream may also not have flushed all the content to the file, so if the process crashes at some point after the write was over, it's not guaranteed that it will actually be saved.
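To make that FileStream point concrete, here is a small sketch (the path and content are made up): disposing the stream releases the file lock and flushes the buffer as soon as the block ends, instead of whenever the finalizer happens to run.

using System;
using System.IO;
using System.Text;

class Example
{
    static void Main()
    {
        // FileShare.None: nobody else can open the file until the stream is disposed.
        using (var stream = new FileStream(@"C:\temp\log.txt", FileMode.Create,
                                           FileAccess.Write, FileShare.None))
        {
            byte[] bytes = Encoding.UTF8.GetBytes("important content");
            stream.Write(bytes, 0, bytes.Length);
        } // Dispose flushes the buffer and releases the handle right here.

        // The file can be reopened immediately because the lock is already gone.
        using (var reader = new StreamReader(@"C:\temp\log.txt"))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}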
It can be safe to not call Dispose, but the problem is knowing when this is the case.
A good 95% of IEnumerator<T> implementations have a Dispose that's safe to ignore, but the 5% is not just 5% that'll cause a bug, but 5% that'll cause a nasty, hard-to-trace bug. More to the point, code that gets passed an IEnumerator<T> will see both the 95% and the 5% and won't be able to dynamically tell them apart (it's possible to implement the non-generic IEnumerator without implementing IDisposable, and how well that turned out can be guessed at by MS deciding to make IEnumerator<T> inherit from IDisposable!).
Of the rest, maybe there's 3 or 4% of the time it's safe. For now. You don't know which 3% without looking at the code, and even then the contract says you have to call it, so the developer can depend on you doing so if they release a new version where it is important.
In summary, always call Dispose(). (I can think of an exception, but it's frankly too weird to even go into the details of, and it's still safe to call it in that case, just not vital).
On the question of implementing IDisposable yourself, avoid the pattern in that accursed document.
I consider that pattern an anti-pattern. It is a good pattern for implementing both IDisposable.Dispose and a finaliser in a class that holds both managed and unmanaged resources. However holding both managed IDisposable and unmanaged resources is a bad idea in the first place.
Instead:
If you have an unmanaged resource, then don't also hold any managed resources that implement IDisposable. Now the Dispose(true) and Dispose(false) code paths are the same, so really they can become:
public class HasUnmanaged : IDisposable
{
    IntPtr unmanagedGoo;

    private void CleanUp()
    {
        if (unmanagedGoo != IntPtr.Zero)
        {
            SomeReleasingMethod(unmanagedGoo);
            unmanagedGoo = IntPtr.Zero;
        }
    }

    public void Dispose()
    {
        CleanUp();
        GC.SuppressFinalize(this);
    }

    ~HasUnmanaged()
    {
        CleanUp();
    }
}
If you have managed resources that need to be disposed, then just do that:
public class HasManaged : IDisposable
{
    IDisposable managedGoo;

    public void Dispose()
    {
        if (managedGoo != null)
            managedGoo.Dispose();
    }
}
There, no cryptic "disposing" bool (how can something be called Dispose and take false for something called disposing?) No worrying about finalisers for the 99.99% of the time you won't need them (the second pattern is way more common than the first). All good.
Really need something that has both a managed and an unmanaged resource? No, you don't really, wrap the unmanaged resource in a class of your own that works as a handle to it, and then that handle fits the first pattern above and the main class fits the second.
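A sketch of that split, with invented names (GooHandle, UsesGoo, SomeReleasingMethod): the handle wrapper is the only place a finaliser is needed, and the outer class becomes a plain "dispose your managed members" type. In practice a SafeHandle-derived class does the same job with extra protection; the hand-rolled version just keeps the sketch self-contained.

using System;

// Fits the first pattern: unmanaged resource only, finaliser as a safety net.
public sealed class GooHandle : IDisposable
{
    private IntPtr handle;

    public GooHandle(IntPtr handle)
    {
        this.handle = handle;
    }

    private void CleanUp()
    {
        if (handle != IntPtr.Zero)
        {
            SomeReleasingMethod(handle);   // hypothetical native release call
            handle = IntPtr.Zero;
        }
    }

    public void Dispose()
    {
        CleanUp();
        GC.SuppressFinalize(this);
    }

    ~GooHandle()
    {
        CleanUp();
    }

    private static void SomeReleasingMethod(IntPtr h) { /* P/Invoke would go here */ }
}

// Fits the second pattern: only managed members, no finaliser needed.
public class UsesGoo : IDisposable
{
    private readonly GooHandle handle = new GooHandle(IntPtr.Zero);

    public void Dispose()
    {
        handle.Dispose();
    }
}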
Only implement the CA1063 pattern when you're forced to because you inherited from a class that did so. Thankfully, most people aren't creating new ones like that any more.
It is never OK to forget to call Dispose (or, as you say, better yet use using).
I guess if the goal of your program is to cause unmanaged resource leaks, then maybe it would be OK.
The implementation of IDisposable indicates that a class uses unmanaged resources. You should always call Dispose() (or use a using block when possible) when you're sure you're done with the class. Otherwise you are unnecessarily keeping unmanaged resources allocated.
In other words, never forget to call Dispose().
Yes, always call Dispose, either explicitly or implicitly (via using). Take, for example, the Timer class. If you do not explicitly stop a timer, and do not dispose it, then it will keep firing until the garbage collector gets around to collecting it. This could actually cause crashes or unexpected behavior.
It's always best to make sure Dispose is called as soon as you are done with it.
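A small sketch of the Timer point (the interval and callback are arbitrary): wrapping it in using stops the callbacks deterministically instead of leaving them running until the GC eventually collects the timer.

using System;
using System.Threading;

class TimerExample
{
    static void Main()
    {
        using (var timer = new Timer(_ => Console.WriteLine("tick"), null,
                                     dueTime: 0, period: 500))
        {
            Thread.Sleep(2000); // let it fire a few times
        } // Dispose is called here; no further callbacks are queued after disposal.
    }
}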
Microsoft (probably not officially) says it is ok to not call Dispose in some cases.
Stephen Toub from Microsoft writes (about calling Dispose on Task):
In short, as is typically the case in .NET, dispose aggressively if it's easy and correct to do based on the structure of your code. If you start having to do strange gyrations in order to Dispose (or in the case of Tasks, use additional synchronization to ensure it's safe to dispose, since Dispose may only be used once a task has completed), it's likely better to rely on finalization to take care of things. In the end, it's best to measure, measure, measure to see if you actually have a problem before you go out of your way to make the code less sightly in order to implement clean-up functionality.
[bold emphasis is mine]
Another case is base streams
var inner = new FileStream(...);
var outer = new StreamReader(inner, Encoding.GetEncoding(1252));
...
outer.Dispose();
inner.Dispose(); // this will trigger an FxCop performance warning about calling Dispose twice
(I have turned off this rule)
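One way to keep this tidy (a sketch, not the only option) is to nest the two using blocks and let the reader's Dispose take care of the underlying stream; FileStream.Dispose is safe to call more than once, so the worst case is the FxCop warning rather than a runtime failure. The path here is made up.

using System;
using System.IO;
using System.Text;

class Example
{
    static void Main()
    {
        using (var inner = new FileStream(@"C:\temp\data.txt", FileMode.Open))
        using (var outer = new StreamReader(inner, Encoding.GetEncoding(1252)))
        {
            Console.WriteLine(outer.ReadToEnd());
        }
        // Disposing 'outer' also disposes 'inner'; the extra Dispose from the
        // outer using block is a harmless no-op on the already-disposed stream.
    }
}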
I was reading about this scenario where making use of the C# using statement can cause problems. Exceptions thrown within the scope of the using block can be lost if the Dispose function called at the end of the using statement was to throw an exception also. This highlights that care should be taken in certain cases when deciding whether to add the using statement.
I only tend to make use of using statements when using streams and the classes derived from DbConnection. If I need to clean up unmanaged resources, I would normally prefer to use a finally block.
This is another use of the IDisposable interface to create a performance timer that will stop the timer and log the time to the registry within the Dispose function.
http://thebuildingcoder.typepad.com/blog/2010/03/performance-profiling.html
Is this good use of the IDisposable interface? It is not cleaning up resources or disposing of any further objects. However, I can see how it could clean up the calling code by wrapping the code it is profiling neatly in a using statement.
Are there times when the using statement and the IDisposable interface should never be used? Has implementing IDisposable or wrapping code in a using statement caused problems for you before?
Thanks
I would say, always use using unless the documentation tells you not to (as in your example).
Having a Dispose method throw exceptions rather defeats the point of using it (pun intended). Whenever I implement it, I always try to ensure that no exceptions will be thrown out of it regardless of what state the object is in.
PS: Here's a simple utility method to compensate for WCF's behaviour. This ensures that Abort is called in every execution path other than when Close is called, and that errors are propagated up to the caller.
public static void CallSafely<T>(ChannelFactory<T> factory, Action<T> action) where T : class {
    var client = (IClientChannel)factory.CreateChannel();
    bool success = false;
    try {
        action((T)client);
        client.Close();
        success = true;
    } finally {
        if (!success) {
            client.Abort();
        }
    }
}
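Usage would look something like this; IMyService, DoWork and the endpoint configuration name are placeholders, and CallSafely is assumed to be in scope at the call site:

// Hypothetical service contract and endpoint configuration name.
var factory = new ChannelFactory<IMyService>("MyServiceEndpoint");
CallSafely(factory, service => service.DoWork());
factory.Close();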
If you find any other funny behaviour cases elsewhere in the framework, you can come up with a similar strategy for handling them.
The general rule of thumb is simple: when a class implements IDisposable, use using. When you also need to catch errors, use try/catch/finally so that you can actually catch them.
A few observations, however.
You ask whether situations exist where IDisposable should not be used. Well: in most situations you shouldn't need to implement it yourself. Use it when you want to free up resources in a timely fashion, as opposed to waiting until the finalizer kicks in.
When IDisposable is implemented, it should mean that the corresponding Dispose method clears its own resources and loops through any referenced or owned objects and calls Dispose on them. It should also flag whether Dispose has already been called, to prevent multiple cleanups and to stop referenced objects from doing the same and ending up in an endless loop. However, all this is no guarantee that all references to the current object are gone, which means it will remain in memory until all references are gone and the finalizer kicks in.
Throwing exceptions in Dispose is frowned upon, and when it happens, state is possibly not guaranteed anymore. A nasty situation to be in. You can work around it by using try/catch/finally and, in the finally block, adding another try/catch. But like I said: this gets ugly pretty quickly.
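Spelled out, that workaround looks roughly like this (SomeDisposableType and the logging are placeholders), and it is indeed ugly:

using System;

class SomeDisposableType : IDisposable
{
    public void DoWork() { /* ... */ }
    public void Dispose() { /* may throw, which is the whole problem */ }
}

class Example
{
    static void Main()
    {
        var resource = new SomeDisposableType();
        try
        {
            resource.DoWork();
        }
        catch (Exception workException)
        {
            Console.Error.WriteLine(workException);        // the original failure
        }
        finally
        {
            try
            {
                resource.Dispose();
            }
            catch (Exception disposeException)
            {
                Console.Error.WriteLine(disposeException); // the Dispose failure, kept separate
            }
        }
    }
}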
Using using is one thing, but don't confuse it with using try/finally. Both are equivalent, but the using-statement makes life easier by adding scoping and null-checks, which are a pain to do by hand each time. The using-statement translates to this (from the C# standard):
{
    SomeType withDispose = new SomeType();
    try
    {
        // use withDispose
    }
    finally
    {
        if (withDispose != null)
        {
            ((IDisposable)withDispose).Dispose();
        }
    }
}
There are occasions where wrapping an object into a using-block is not necessary. These occasions are rare. They happen when you find yourself inheriting from an interface that inherits from IDisposable just in case a child would require disposing. An often-used example is IComponent, which is used with every Control (Form, EditBox, UserControl, you name it). And I rarely see people wrapping all these controls in using-statements. Another famous example is IEnumerator<T>. When using its descendants, one rarely sees using-blocks either.
Conclusion
Use the using-statement ubiquitously, and be judicious about alternatives or leaving it out. Make sure you know the implications of (not) using it, and be aware of the equality of using and try/finally. Need to catch anything? Use try/catch/finally.
I think the bigger problem is throwing exceptions in Dispose. RAII patterns usually explicitly state that such should not be done, as it can create situations just like this one. I mean, what is the recovery path for something not disposing correctly, other than simply ending execution?
Also, it seems like this can be avoided with two try-catch statements:
try
{
using(...)
{
try
{
// Do stuff
}
catch(NonDisposeException e)
{
}
}
}
catch(DisposeException e)
{
}
The only problem that can occur here is if DisposeException is the same or a supertype of NonDisposeException, and you are trying to rethrow out of the NonDisposeException catch. In that case, the DisposeException block will catch it. So you might need some additional boolean marker to check for this.
The only case I know about is WCF clients. That's due to a design bug in WCF - Dispose should never throw exceptions. They missed that one.
One example is the IAsyncResult.AsyncWaitHandle property. The astute programmer will recognize that WaitHandle classes implement IDisposable and naturally try to greedily dispose them. Except that most of the implementations of the APM in the BCL actually do lazy initialization of the WaitHandle inside the property. Obviously the result is that the programmer did more work than was necessary.
So where is the breakdown? Well, Microsoft screwed up the IAsyncResult interface. Had they followed their own advice, IAsyncResult would have derived from IDisposable, since the implication is that it holds disposable resources. The astute programmer would then just call Dispose on the IAsyncResult and let it decide how best to dispose of its constituents.
This is one of the classic fringe cases where disposing of an IDisposable could be problematic. Jeffrey Richter actually uses this example to argue (incorrectly in my opinion) that calling Dispose is not mandatory. You can read the debate here.
I've been debugging some code recently that was a bit memory leaky. It's a long running program that runs as a Windows service.
If you find a class wearing an IDisposable interface, it is telling you that some of the resources it uses are outside the abilities of the garbage collector to clean up for you.
The reason it is telling you this is that you, the user of this object, are now responsible for when these resources are cleaned up. Congratulations!
As a conscientious developer, you are nudged towards calling the .Dispose() method when you've finished with the object in order to release those unmanaged resources.
There is the nice using() pattern to help clean up these resources once they are finished with. Which just leaves finding exactly which objects are causing the leakiness.
In order to aid tracking down these rogue unmanaged resources, is there any way to query what objects are loitering around waiting to be Disposed at any given point in time?
There shouldn't be any cases where you don't want to call Dispose, but the compiler cannot tell you where you should call Dispose.
Suppose you write a factory class which creates and returns disposable objects. Should the compiler bug you for not calling Dispose when the cleanup should be the responsibility of your callers?
IDisposable is more for making use of the using keyword. It's not there to force you to call Dispose() - it's there to enable you to call it in a slick, non-obtrusive way:
class A : IDisposable
{
    public void method1() { }
    public void Dispose() { /* release resources here */ }
}

// stuff
using (var a = new A())
{
    a.method1();
}
after you leave the using block, Dispose() is called for you.
"Is there any way to detect at the end of the program which objects are loitering around waiting to be Disposed?"
Well, if all goes well, at the end of the program the CLR will call all objects' finalizers, which, if the IDisposable pattern was implemented properly, will call the Dispose() methods. So at the end, everything will be cleared up properly.
The problem is that if you have a long-running program, chances are some of your IDisposable instances are locking resources that shouldn't be locked. For cases like this, user code should use the using block or call Dispose() as soon as it is done with an object, but there's really no way for anyone except the code author to know that.
You are not required to call the Dispose method. Implementing the IDisposable interface is a reminder that your class is probably using resources, such as a database connection or a file handle, that need to be closed, so GC alone is not enough.
The best practice AFAIK is to call Dispose or even better, put the object in a using statement.
A good example is the .NET 2.0 Ping class, which runs asynchronously. Unless it throws an exception, you don't actually call Dispose until the callback method. Note that this example has some slightly weird casting due to the way Ping implements the IDisposable interface, but also inherits Dispose() (and only the former works as intended).
private void Refresh( Object sender, EventArgs args )
{
Ping ping = null;
try
{
ping = new Ping();
ping.PingCompleted += PingComplete;
ping.SendAsync( defaultHost, null );
}
catch ( Exception )
{
( (IDisposable)ping ).Dispose();
this.isAlive = false;
}
}
private void PingComplete( Object sender, PingCompletedEventArgs args )
{
this.isAlive = ( args.Error == null && args.Reply.Status == IPStatus.Success );
( (IDisposable)sender ).Dispose();
}
Can I ask how you're certain that it's specifically objects which implement IDisposable? In my experience the most-likely zombie objects are objects which have not properly had all their event handlers removed (thereby leaving a reference to them from another 'live' object and not qualifying them as unreachable during garbage collection).
There are tools which can help track these down by taking a snapshot of the managed heap and stacks and allowing you to see what objects are considered in-use at a given point in time. A freebie is WinDbg with sos.dll; it'll take some googling for tutorials to show you the commands you need, but it works and it's free. A more user-friendly (don't confuse that with "simple") option is Red Gate's ANTS Profiler running in Memory Profiling mode; it's a slick tool.
Edit: Regarding the usefulness of calling Dispose: it provides a deterministic way to clean up objects. Garbage collection only runs when your app has run out of its allocated memory; it's an expensive task which basically stops your application from executing, looks at all objects in existence, builds a tree of "reachable" (in-use) objects, then cleans up the unreachable ones. Manually cleaning up an object frees it before the GC ever has to run.
Because the method creating the disposable object may be legitimately returning it as a value; that is, the compiler can't tell how the programmer intends to use it.
What if the disposable object is created in one class/module (say a factory) and is handed off to a different class/module to be used for a while before being disposed of? That use case should be OK, and the compiler shouldn't badger you about it. I suspect that's why there's no compile-time warning: the compiler assumes the Dispose call is in another file.
Determining when and where to call Dispose() is a very subjective thing, dependent on the nature of the program and how it uses disposable objects. Subjective problems are not something compilers are very good at. Instead, this is more a job for static analysis, which is the arena of tools like FxCop and StyleCop, or perhaps more advanced compilers like Spec#/Sing#. Static analysis uses rules to determine if subjective requirements, such as "Always ensure .Dispose() is called at some point.", are met.
I am honestly not sure if any static analyzers exist that are capable of checking whether .Dispose() is called. Even for static analysis as it exists today, that might be a bit on the too-subjective side of things. If you need a place to start looking, however, "Static Analysis for C#" is probably the best place.