Can I implement a DisposableBase abstract class? - c#

Is there a catch or hidden problem in using a DisposableBase base class instead of recoding the Dispose pattern on every class?
Why isn't everyone using such a relevant class?
Edits:
I naturally only meant classes that implement IDisposable
I know it uses up the option for inheritance, but I'm willing to pay the price (at least when I can and it doesn't hurt me otherwise).
When I can seal the class, I do - but I have some cases where I want the base of an inheritance hierarchy to be Disposable.

You don't need to implement Dispose() on every class - just those with something that needs deterministic cleanup. Re a Disposable base-class, I'm not entirely sure it provides a whole lot - IDisposable isn't a complex interface. The main time it might be useful is if you are handling unmanaged resources and want a finalizer, but even then it isn't much code.
Personally, I wouldn't bother with such a base class. In particular, inheritance (in a single-inheritance world) gets restrictive very quickly. But more to the point, overriding a method isn't much different to simply providing a public Dispose() method.
Again: you only need a finalizer etc if you are handling unmanaged objects.
If I had a lot of these (unmanaged resources), I might see whether I could get PostSharp to do the work for me. I don't know if one already exists, but it might be possible to create an aspect that handles (in particular) the finalizer etc. Who knows...

Well, it uses up your one option for inheritance to describe a single aspect of your class - that's not ideal, IMO. It would be interesting to try to do something with composition, where you have a reference to a DisposableHelper and the implementation of IDisposable just calls helper.Dispose, which has the rest of the boilerplate logic in - and can call back to your code via a callback delegate. Hmm. Subclasses could subscribe to a protected Disposing event to register "I need to do something"... it might be worth looking at some time.
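A rough sketch of that composition idea, assuming a hypothetical DisposableHelper with a Disposing event (none of these names exist in the framework):

using System;

public sealed class DisposableHelper
{
    private bool disposed;

    // Owners subscribe here to say "I need to do something on dispose".
    public event EventHandler Disposing;

    public bool IsDisposed { get { return disposed; } }

    public void Dispose(object sender)
    {
        if (disposed) return;          // boilerplate lives here: only run clean-up once
        disposed = true;
        EventHandler handler = Disposing;
        if (handler != null) handler(sender, EventArgs.Empty);
    }
}

public class MyComponent : IDisposable
{
    private readonly DisposableHelper disposeHelper = new DisposableHelper();

    public MyComponent()
    {
        // register clean-up via a callback rather than overriding a method
        disposeHelper.Disposing += (s, e) => { /* release this class's managed resources */ };
    }

    public void Dispose()
    {
        disposeHelper.Dispose(this);   // all the shared boilerplate is in the helper
    }
}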
Personally I don't find myself implementing IDisposable often enough to make it an issue - and when I do, I typically seal my classes anyway, so half of the stuff in the pattern becomes a non-issue.

As Marc Gravell said, you only need a finalizer if you are handling unmanaged objects. Introducing an unnecessary finalizer in a base class is a bad idea, as per the reasons in section 1.1.4 of the Dispose, Finalization, and Resource Management guidelines:
There is a real cost associated with instances with finalizers, both from a performance and code complexity standpoint. ... Finalization increases the cost and duration of your object’s lifetime as each finalizable object must be placed on a special finalizer registration queue when allocated, essentially creating an extra pointer-sized field to refer to your object. Moreover, objects in this queue get walked during GC, processed, and eventually promoted to yet another queue that the GC uses to execute finalizers. Increasing the number of finalizable objects directly correlates to more objects being promoted to higher generations, and an increased amount of time spent by the GC walking queues, moving pointers around, and executing finalizers. Also, by keeping your object’s state around longer, you tend to use memory for a longer period of time, which leads to an increase in working set.
If you use SafeHandle (and related classes), it's unlikely that any classes that derive from DisposableBase would ever need to be finalized.
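For illustration, a minimal sketch of that approach; the handle class and its owner are hypothetical, and the only real API used is the usual kernel32 CloseHandle declaration:

using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

// The SafeHandle subclass owns the finalization logic, so classes that merely
// hold a field of this type never need their own finalizer.
sealed class MyNativeHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    public MyNativeHandle() : base(true) { }

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool CloseHandle(IntPtr handle);

    protected override bool ReleaseHandle()
    {
        return CloseHandle(handle);   // runs from Dispose or, as a backstop, from SafeHandle's finalizer
    }
}

// A disposable class built on top of it needs no finalizer at all.
class NativeResourceUser : IDisposable
{
    private readonly MyNativeHandle handle;

    public NativeResourceUser(MyNativeHandle handle)
    {
        this.handle = handle;   // assume the handle was obtained from some native call
    }

    public void Dispose()
    {
        handle.Dispose();
    }
}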

Related

Just how 'disposable' is ReaderWriterLockSlim?

I principally follow the IDisposable pattern, and for most classes that is justified. But ReaderWriterLockSlim made me question the viability of applying such a pattern. All ReaderWriterLockSlim.Dispose does is close some event handles. So how important is it to Dispose such a class with so few resources? In this case, I really wouldn't mind if the GC had to wait another round for the finalizers of the unmanaged resources to finish.
The consequence of applying the IDisposable pattern is considerable, however: every class that uses a disposable class now has to implement IDisposable too. In my particular case, I am implementing a wrapper for HashSet. I wouldn't expect such an object to require disposal just because, incidentally, it uses a synchronizer which does.
Are there any reasons not to violate the disposable pattern in this case? While I am eager to, I wouldn't do so in practice, because violating consistency is much worse.
The problem with unmanaged OS handles is that handles come from a limited supply. The GC is not aware of this.
The pure memory consumption of a handle is not that big. Nothing more than an object in kernel memory and probably a hash table entry somewhere.
You are right in that it is not enough to say: "You must always dispose all disposable objects". That rule is too simple. For example the Task class does not need to be disposed. If you know what you are doing you can take a looser stance regarding disposal. Be aware that not all team members might understand this point (now you can leave a link to this answer in the source code...).
If you are sure that you will not leak a lot of handles you can safely do this. Be aware that under edge conditions (load, bugs, ...) you might leak more than you anticipated, causing production issues.
If this field is static you don't need to dispose of it; it will (rightly) have the same lifetime as your application. I see it's not, so let's move on.
The correct way to handle an IDisposable is to dispose of it. I think we need a good reason not to do this.
Use another lock:
I think the best thing to do is to use Monitor or another lock, which will have the bonus of simplifying your code as well. ConcurrentDictionary and other framework classes seem to take this approach.
You are worried about lock convoys, but I'm not sure this is even solved by ReaderWriterLockSlim; the only real solution is to hold fewer locks and hold them for less time.
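As a sketch of the "use another lock" suggestion above (the wrapper name and members are made up for illustration), a plain lock removes the disposal question entirely:

using System.Collections.Generic;

// Hypothetical thread-safe HashSet wrapper using Monitor instead of
// ReaderWriterLockSlim - nothing here owns a disposable resource.
public class SynchronizedSet<T>
{
    private readonly HashSet<T> set = new HashSet<T>();
    private readonly object gate = new object();

    public bool Add(T item)
    {
        lock (gate) { return set.Add(item); }
    }

    public bool Contains(T item)
    {
        lock (gate) { return set.Contains(item); }
    }

    public bool Remove(T item)
    {
        lock (gate) { return set.Remove(item); }
    }
}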
Don't dispose:
This needs a justification. Can you demonstrate the needed performance benefits here?
If you have a few of these objects that are long lived, fine, not all disposables are equally weighty (it's not like you're leaving a Word document open), and you will probably get away with it. As has been pointed out, what is the point of disposing of all this milliseconds before the application closes anyway? I believe the destructor of an IDisposable is meant to handle situations where the object is not disposed, although you can't be sure when or even if it is called.
If you have a long-lived application with lots of short-lived usages of this class, however, you may run into trouble. You are baking in your assumptions about the use of your code, so just be aware.

Should I use a finalize method with an IDisposable class containing a TransactionScope?

I've written a class which pairs up a TransactionScope with a LINQ to SQL DataContext.
It implements the same methods as the TransactionScope, Dispose() and Complete(), and exposes the DataContext.
Its purpose is to ensure that DataContexts are not re-used: they are paired up with a single transaction and disposed along with it.
Should I include a Finalize method in the class? One that calls Dispose if it has not already been called? Or is that only for IDisposables that reference unmanaged resources?
No, never implement a finaliser in a class that is disposable just because it wraps a disposable class.
Consider that you have three clean-up scenarios for a class with Dispose and a finaliser:
Dispose() is called.
The finaliser is called on application shutdown.
The object was going to be collected, but the finaliser hadn't been suppressed (most often from a call to Dispose(), but note that you should always suppress your finaliser when anything puts it in a state where it doesn't need to be cleaned up, and re-registered if it is put in a state where it does need it - e.g. if you had an Open()/Close() pair of methods).
Now, if you are directly managing an unmanaged resource (e.g. a handle through an IntPtr), these three scenarios, where you will have one of the two clean-up methods called, directly match the three scenarios where you need clean-up to happen.
Okay. So, let's consider a disposable wrapping a disposable where the "outer" class has a finaliser implemented correctly:
~MyClass()
{
    // This space deliberately left blank.
}
The finaliser doesn't do anything, because there's no unmanaged clean-up for it to handle. The only effect is that if this null finaliser hasn't been suppressed, then upon garbage collection it will be put in the finaliser queue - keeping it and anything only reachable through its fields alive and promoting them to the next generation - and eventually the finalisation thread will call this nop method, mark it as having been finalised, and it becomes eligible for garbage collection again. But since it was promoted it'll be Gen 1 if it had been Gen 0, and Gen 2 if it had been Gen 1.
The actual object that did need to be finalised will also be promoted, and it'll have to wait that bit longer not just for collection, but also for finalisation. It's going to end up in Gen 2 no matter what.
Okay, that's bad enough, let's say we actually put some code in the finaliser that did something with the field that holds the finalisable class.
Wait. What are we going to do? We can't call a finaliser directly, so we dispose it. Oh wait, are we sure that for this class the behaviour of Dispose() and that of the finaliser is close enough that it's safe? How do we know it doesn't hold onto some resources via weak-references that it will try to deal with in the dispose method and not in the finaliser? The author of that class knows all about the perils of dealing with weak-references from within a finaliser, but they thought they were writing a Dispose() method, not part of someone else's finaliser method.
And then, what if the outer finaliser was called during application shut-down? How do you know the inner finaliser wasn't already called? When the author of the class was writing their Dispose() method, are they sure to ponder "okay, now let's make sure I handle the case of this being called after the finaliser has already run and the only thing left for this object to do is have its memory freed"? Not really. It might be that they guarded against repeated calls to Dispose() in such a way that also protects from this scenario, but you can't really bet on it (especially since they won't be helping that in the finaliser, which they know will be the last method ever called, and any other sort of cleanup, like nulling fields that won't be used again to flag them as such, is pointless). It could end up doing something like dropping the reference count of some reference-counted resource, or otherwise violating the contract of the unmanaged code it is its job to deal with.
So. The best-case scenario with a finaliser in such a class is that you damage the efficiency of garbage collection, and the worst case is that you have a bug which interferes with the perfectly good clean-up code you were trying to help.
Note also the logic behind the pattern MS used to promote (and still have in some of their classes) where you have a protected Dispose(bool disposing) method. Now, I have a lot of bad things to say about this pattern, but when you look at it, it is designed to deal with the very fact that what you clean up with Dispose() and what you clean up in a finaliser are not the same - the pattern means that an object's directly-held unmanaged resources will be cleaned up in both cases (in the scenario of your question, there are no such resources) and that managed resources such as an internally-held IDisposable object are cleaned-up only from Dispose(), and not from a finaliser.
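For reference, that pattern looks roughly like this in outline; the IntPtr field and the ReleaseHandle helper are placeholders rather than a real API:

using System;

public class ResourceHolder : IDisposable
{
    private IntPtr unmanagedHandle;        // directly-held unmanaged resource: cleaned up on both paths
    private IDisposable managedResource;   // managed resource: cleaned up only from Dispose()

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);         // the finaliser is no longer needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // managed clean-up: safe here, not safe from the finaliser
            if (managedResource != null) managedResource.Dispose();
        }
        // unmanaged clean-up happens on both the Dispose and finaliser paths
        ReleaseHandle(unmanagedHandle);
        unmanagedHandle = IntPtr.Zero;
    }

    ~ResourceHolder()
    {
        Dispose(false);
    }

    private static void ReleaseHandle(IntPtr handle)
    {
        // hypothetical native call that frees the handle
    }
}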
Implement IDisposable.Dispose() if you have anything that needs to be cleaned up, whether an unmanaged resource, an object that is IDisposable or anything else.
Write a finaliser if, and only if you directly have an unmanaged resource that needs to be cleaned up, and make cleaning it up the only thing you do there.
For bonus points, avoid being in both camps at once - wrap all unmanaged resources in disposable and finalisable classes that deal only with those unmanaged resources, and if you need to combine that functionality with other resources do it in a disposable-only class that uses such classes. That way clean-up will be clearer, simpler, less prone to bugs, and less damaging to GC efficiency (no risk of finalisation of one object delaying that of another).
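A sketch of that split, with hypothetical names: the unmanaged handle lives in its own small finalisable class, and the class that combines it with managed resources is dispose-only:

using System;

// Deals with exactly one unmanaged resource and nothing else.
internal sealed class RawHandle : IDisposable
{
    private IntPtr handle;   // assume obtained from some native API

    public void Dispose()
    {
        Release();
        GC.SuppressFinalize(this);
    }

    ~RawHandle()
    {
        Release();
    }

    private void Release()
    {
        if (handle != IntPtr.Zero)
        {
            // hypothetical native call that frees the handle
            handle = IntPtr.Zero;
        }
    }
}

// Combines the handle with managed resources; dispose-only, no finaliser.
public sealed class CombinedResource : IDisposable
{
    private readonly RawHandle raw = new RawHandle();
    private readonly System.IO.MemoryStream buffer = new System.IO.MemoryStream();

    public void Dispose()
    {
        buffer.Dispose();
        raw.Dispose();
    }
}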
A Finalizer is meant solely for cleaning up unmanaged resources. There is no use in calling Dispose on dependent objects inside the finalizer, since if those objects manage critical resources, they have a finalizer themselves.
In .NET 2.0 and up there is even less reason to implement a finalizer, now that .NET contains the SafeHandle class.
However, one reason I sometimes find to still implement a finalizer is to find out whether developers forgot to call Dispose. I let the class implement the finalizer only in the debug build and have it write to the Debug window.
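A minimal sketch of that debug-only finalizer technique (the class name is illustrative):

using System;
using System.Diagnostics;

public class ConnectionWrapper : IDisposable
{
    public void Dispose()
    {
        // real clean-up goes here
        GC.SuppressFinalize(this);   // in debug builds this also silences the finalizer below
    }

#if DEBUG
    // Debug-only finalizer: if this ever runs, someone forgot to call Dispose.
    ~ConnectionWrapper()
    {
        Debug.WriteLine("ConnectionWrapper was not disposed; check the owning code.");
    }
#endif
}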
There is no simple answer for this one - it is debatable.
What?
The debate is whether to use a finaliser with the full Disposable pattern. Your code uses transactions and a database context - those (usually) use unmanaged resources (like kernel transaction objects and TCP/IP connections).
Why?
If you use any unmanaged resource that should be cleaned up, you should implement IDisposable. Then client code can wrap use of the class in the recommended using (IDisposable myClass = new MyClass()) { ... } construct. The problem is that if a developer doesn't call IDisposable.Dispose() explicitly or implicitly, the resource won't be freed up automatically, even if the object myClass has been collected by the GC. This is because the GC never calls Dispose during collection; that's the responsibility of the finalization queue.
Thus you can define a finalizer that will eventually be called by the GC finalization thread, which is independent from garbage collection.
Opinions
Some people argue that you should just make sure you put all the disposable code into using () {} and forget about finalization. After all, you must release such resources ASAP, and the whole finalization process is kind of vague for many developers.
In contrast, I prefer to explicitly implement a finalizer, simply because I don't know who will use my code. So if someone forgets to call Dispose on a class that requires it, the resource will eventually be released.
Conclusion
Personally, I would recommend implementing a finalizer on any class that implements IDisposable.

Correct way of implementing Finalize and Dispose (when parent class implements IDisposable)

I was implementing Finalize and Dispose in my classes. I implemented IDisposable on my parent class and overrode the Dispose(bool) overload in my child classes. I was not sure:
Whether to use a duplicate isDisposed variable (as it's already there in the base class) or not?
Whether to implement a finalizer in the child class too or not?
Both these things are done in example given here -
http://guides.brucejmack.biz/CodeRules/FxCop/Docs/Rules/Usage/DisposeMethodsShouldCallBaseClassDispose.html
Whereas the example in this MSDN article doesn't have either of these two -
http://msdn.microsoft.com/en-us/library/b1yfkh5e.aspx
And this example on MSDN is not complete -
http://msdn.microsoft.com/en-us/library/ms182330.aspx
It's very rare for a finalizer to be useful. The documentation you link to isn't totally helpful - it offers the following rather circular advice:
Implement Finalize only on objects that require finalization
That's an excellent example of begging the question, but it's not very helpful.
In practice, the vast majority of the time you don't want a finalizer. (One of the learning curves .NET developers have to go through is discovering that in most of the places they think they need a finalizer, they don't.) You've tagged this as (amongst other things) a WPF question, and I'd say it'd almost always be a mistake to put a finalizer on a UI object. (So even if you are in one of the unusual situations that turns out to require a finalizer, that work doesn't belong anywhere near code that concerns itself with WPF.)
For most of the scenarios in which finalizers seem like they might be useful, they turn out not to be, because by the time your finalizer runs, it's already too late for it to do anything useful.
For example, it's usually a bad idea to try to do anything with any of the objects your object has a reference to, because by the time your finalizer runs, those objects may already have been finalized. (.NET makes no guarantees about the order in which finalizers run, so you simply have no way of knowing whether the objects you've got references to have been finalized.) It's a bad idea to invoke a method on an object whose finalizer has already been run.
If you have some way of knowing that some object definitely hasn't been finalized, then it is safe to use it, but that's a pretty unusual situation to be in. (...unless the object in question has no finalizer, and makes use of no finalizable resources itself. But in that case, it's probably not an object you'd actually need to do anything to when your own object is going away.)
The main situation in which finalizers seem useful is interop: e.g., suppose you're using P/Invoke to call some unmanaged API, and that API returns you a handle. Perhaps there's some other API you need to call to close that handle. Since that's all unmanaged stuff, the .NET GC doesn't know what those handles are, and it's your job to make sure that they get cleaned up, at which point a finalizer is reasonable...except in practice, it's almost always best to use a SafeHandle for that scenario.
In practice, the only places I've found myself using finalizers have been a) experiments designed to investigate what the GC does, and b) diagnostic code designed to discover something about how particular objects are being used in a system. Neither kind of code should end up going into production.
So the answer to whether you need "to implement a finalizer in child class too or not" is: if you need to ask, then the answer is no.
As for whether to duplicate the flag...other answers are providing contradictory advice here. The main points are 1) you do need to call the base Dispose and 2) your Dispose needs to be idempotent. (I.e., it doesn't matter if it's called once, twice, 5 times, 100 times - it shouldn't complain if it's called more than once.) You're at liberty to implement that however you like - a boolean flag is one way, but I've often found that it's enough to set certain fields to null in my Dispose method, at which point that removes any need for a separate boolean flag - you can tell that Dispose was already called because you already set those fields to null.
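For example, a sketch of an idempotent Dispose where the field itself serves as the flag (the class and its StreamWriter field are just illustrative):

using System;
using System.IO;

public class LogWriter : IDisposable
{
    private StreamWriter writer = new StreamWriter("log.txt");   // hypothetical resource

    public void Write(string line)
    {
        if (writer == null) throw new ObjectDisposedException(GetType().Name);
        writer.WriteLine(line);
    }

    public void Dispose()
    {
        // Idempotent: the nulled field doubles as the "already disposed" flag.
        if (writer != null)
        {
            writer.Dispose();
            writer = null;
        }
    }
}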
A lot of the guidance out there on IDisposable is extremely unhelpful, because it addresses the situation where you need a finalizer, but that's actually a very unusual case. It means that lots of people write IDisposable implementations that are far more complex than necessary. In practice, most classes fall into the category Stephen Cleary calls "level 1" in the article that jpierson linked to. And for these, you don't need all the GC.KeepAlive, GC.SuppressFinalize, and Dispose(bool) stuff that clutters up most of the examples. Life's actually much simpler most of the time, as Cleary's advice for these "level 1" types shows.
Duplicate is needed
If you don't have any clean-up in the child class, simply call base.Dispose(); if there is some class-level clean-up, do it after a call to base.Dispose(). You need to separate the state of these two classes, so there should be an IsDisposed boolean for each class. This way you can add clean-up code whenever you need to.
When you make a class IDisposable, you simply tell the GC that you're taking care of its clean-up procedure, and you should call SuppressFinalize on this class so the GC removes it from its queue. Unless you call GC.SuppressFinalize(this), nothing special happens to an IDisposable class. So if you implement it as I mentioned, there's no need for a finalizer, since you just told the GC not to finalize it.
The correct way to implement IDisposable depends on whether you have any unmanaged resources owned by your class. The exact way to implement IDisposable is still something not all developers agree on and some like Stephen Cleary have strong opinions on the disposable paradigm in general.
see: Implementing Finalize and Dispose to Clean Up Unmanaged Resources
The documentation for the IDisposable interface also explains this briefly, and this article points out some of the same things, also on MSDN.
As far as whether a duplicate boolean field "isDisposed" is required goes: it appears that this is mainly just a useful convention that can be used when a subclass itself may add additional unmanaged resources that need to be disposed of. Since Dispose is declared virtual, calling Dispose on a subclass instance always causes that class's Dispose method to be called first, which in turn calls base.Dispose as its last step, giving a chance to clean up each level in the inheritance hierarchy. So I would probably summarize this as: if your subclass has additional unmanaged resources above what is owned by the base, then you will probably be best to have your own boolean isDisposed field to track its disposal in a transactional manner inside its Dispose method, but as Ian mentions in his answer, there are other ways to represent an already-disposed state.
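A sketch of that arrangement, with each level keeping its own flag and chaining to base.Dispose(disposing) as the last step (names are illustrative):

using System;

public class BaseResource : IDisposable
{
    private bool isDisposed;            // state for this level only

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (isDisposed) return;
        if (disposing)
        {
            // release the base class's managed resources here
        }
        isDisposed = true;
    }
}

public class DerivedResource : BaseResource
{
    private bool isDisposed;            // separate flag for the derived level

    protected override void Dispose(bool disposing)
    {
        if (!isDisposed)
        {
            if (disposing)
            {
                // release the derived class's own resources here
            }
            isDisposed = true;
        }
        base.Dispose(disposing);        // chain to the base class as the last step
    }
}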
1) No need to duplicate
2) Implementing a finalizer will help to dispose items that are not explicitly disposed, but it is not guaranteed. It is good practice to do.
Only implement a finalizer if an object holds information about stuff needing cleanup, and this information is in some form other than Object references to other objects needing cleanup (e.g. a file handle stored as an Int32). If a class implements a finalizer, it should not hold strong Object references to any other objects which are not required for cleanup. If it would hold other references, the portion responsible for cleanup should be split off into its own object with a finalizer, and the main object should hold a reference to that. The main object should then not have a finalizer.
Derived classes should only have finalizers if the purpose of the base class was to support one. If the purpose of a class doesn't center around a finalizer, there's not much point allowing a derived class to add one, since derived classes almost certainly shouldn't (even if they need to add unmanaged resources, they should put the resources in their own class and just hold a reference to it).

What's the purpose of implementing the IDisposable interface?

What's the purpose of implementing the IDisposable interface? I've seen some classes implementing it and I don't understand why.
If your class creates unmanaged resources, then you can implement IDisposable so that these resources will be cleaned up properly when the object is disposed of. You implement Dispose and release them there.
When your class makes use of some system resource, it's the class's responsibility to make sure the resource is freed too. By .NET design you're supposed to do that in the Dispose method of the class. The IDisposable interface marks that your class needs to free resources when it's no longer in use, and the Dispose method is made available so that users of your class can call it to free the consumed resources.
Implementing IDisposable is also essential if you want automatic clean-up to work properly and want to use the using() statement.
As well as freeing unmanaged resources, objects can usefully perform some operation the moment they go out of scope. A useful example might be a timer object: such objects could print out the time elapsed since their construction in the Dispose() method. These objects could then be used to log the approximate time taken for some set of operations:
using (Timer tmr = new Timer("blah"))
{
    // do whatever
}
This can be done manually, of course, but my feeling is that one should take advantage wherever possible of the compiler's ability to generate the right code automatically.
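Such a timer could be sketched as follows, using Stopwatch; the Timer name matches the snippet above but is otherwise hypothetical:

using System;
using System.Diagnostics;

public sealed class Timer : IDisposable
{
    private readonly string label;
    private readonly Stopwatch watch = Stopwatch.StartNew();

    public Timer(string label)
    {
        this.label = label;
    }

    public void Dispose()
    {
        // Runs when the using block above exits, logging the elapsed time.
        Console.WriteLine("{0}: {1} ms", label, watch.ElapsedMilliseconds);
    }
}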
It all has to do with the garbage collection mechanism. Chris Sells describes garbage collection, finalizers, and the reason for the Dispose pattern (and the IDisposable interface) in episode 10 of .NET Rocks! (starting about 34 minutes in).
Many objects manipulate other entities in ways that will cause problems if not cleaned up. These other entities may be almost anything, and they may be almost anywhere. As an example, a Socket object may ask another machine to open up a TCP connection. That other machine might not be capable of handling very many connections at once; indeed, it could be a web-equipped appliance that can only handle one connection at a time. If a program were to open a socket and simply forget about it, no other computer would be able to connect to the appliance unless or until the socket got closed (perhaps the appliance might close the socket itself after a few minutes of inactivity, but it would be useless until then).
If an object implements IDisposable, that means it has the knowledge and impetus required to perform necessary cleanup actions, and such actions need to be performed before such knowledge and impetus is lost. Calling IDisposable.Dispose will ensure that all such cleanup actions get carried out, whereupon the object may be safely abandoned.
Microsoft allows for objects to request protection from abandonment by registering a method called Finalize. If an object does so, the Finalize method will be called if the system detects that the object has been abandoned. Neither the object, nor any objects to which it holds direct or indirect references, will be erased from memory until the Finalize method has been given a chance to run. This provides something of a "backstop" in case an object is abandoned without being first Disposed. There are many traps, however, with objects that implement Finalize, since there's no guarantee as to when it will be called. Not only might an object be abandoned a long time before Finalize gets called, but if one isn't careful the system may actually call Finalize on an object while part of it is still in use. Dangerous stuff. It's far better to use Dispose properly.

What's the point of overriding Dispose(bool disposing) in .NET?

If I write a class in C# that implements IDisposable, why isn't it sufficient for me to simply implement
public void Dispose(){ ... }
to handle freeing any unmanaged resources?
Is
protected virtual void Dispose(bool disposing){ ... }
always necessary, sometimes necessary, or something else altogether?
The full pattern including a finalizer, introduction of a new virtual method and "sealing" of the original dispose method is very general purpose, covering all bases.
Unless you have direct handles on unmanaged resources (which should be almost never) you don't need a finalizer.
If you seal your class (and my views on sealing classes wherever possible are probably well known by now - design for inheritance or prohibit it) there's no point in introducing a virtual method.
I can't remember the last time I implemented IDisposable in a "complicated" way vs doing it in the most obvious way, e.g.
public void Dispose()
{
    somethingElse.Dispose();
}
One thing to note is that if you're going for really robust code, you should make sure that you don't try to do anything after you've been disposed, and throw ObjectDisposedException where appropriate. That's good advice for class libraries which will be used by developers all over the world, but it's a lot of work for very little gain if this is just going to be a class used within your own workspace.
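A sketch of that "obvious way" for a sealed class, with an ObjectDisposedException guard (the class and its StreamWriter field are illustrative):

using System;
using System.IO;

public sealed class ReportWriter : IDisposable
{
    private readonly StreamWriter somethingElse = new StreamWriter("report.txt");
    private bool disposed;

    public void Write(string line)
    {
        // Fail loudly rather than doing anything after disposal.
        if (disposed) throw new ObjectDisposedException(GetType().Name);
        somethingElse.WriteLine(line);
    }

    public void Dispose()
    {
        if (disposed) return;
        disposed = true;
        somethingElse.Dispose();
    }
}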
It's not strictly necessary. It is part of the recommended Disposable pattern. If you haven't read the Framework Design Guidelines section on this (9.3 in the first edition, don't have the second edition handy sorry) then you should. Try this link.
It's useful for distinguishing between disposable cleanup and finalizable garbage-collection-is-trashing-me.
You don't have to do it that way but you should read up on it and understand why this is recommended before discounting it as unnecessary.
There's a bit of bias in the MSFT docs about the disposable pattern. There are two reasons you should implement IDisposable:
You've got fields of a type that implements IDisposable
You've got a finalizer.
Case 1 is pretty common in most code. Case 2 is pretty common in code that Microsoft writes; they were the ones that wrote the managed wrappers around the unmanaged resources, the ones that need finalization. But it should be very uncommon in your code. After all, you've got all those nice .NET classes to do the dirty work for you. You just have to call their Dispose() methods.
Only case 2 requires the disposable pattern. Microsoft needs to use it a lot. You'll just need the simple Dispose() most of the time.
In addition to the other great answers, you may want to check these articles:
Implementing IDisposable and the Dispose Pattern Properly
IDisposable: What Your Mother Never Told You About Resource Deallocation (The Disposable Design Principle)
The additional method with the bool disposing came out of a framework design guideline somewhere. It is simply a pattern to allow your class's dispose method to be called multiple times without throwing an exception. It isn't absolutely needed. Technically you could do it in the dispose method.
Just to expand on what others have said: it's not just that you don't need the 'complex dispose', it's that you actually don't want it, for performance reasons.
If you go the 'complex dispose' route, and implement a finalizer, and then forget to explicitly dispose your object, your object (and anything it references) will survive an extra GC generation before it's really disposed (since it has to hang around one more time for the CLR to call the finalizer). This just causes more memory pressure that you don't need. Additionally, calling the finalizer on a whole heap of objects has a non-trivial cost.
So avoid, unless you (or your derived types) have unmanaged resources.
Oh, and while we're in the area: methods on your class which handle events from others must be 'safe' in the face of being invoked after your class has been disposed. The simplest approach is to just perform a no-op if the class is disposed. See http://blogs.msdn.com/ericlippert/archive/2009/04/29/events-and-races.aspx
One thing that it gives you is the ability to do work in Dispose() unrelated to finalization, and still clean up unmanaged resources.
Doing anything to a managed object other than 'yourself' in a finalizer is extremely... unpredictable. Most of this is due to the fact that your finalizers will be called in stage 2 shutdown of your AppDomain in a non-deterministic manner - so when your finalizer is called, it is extremely likely that objects that you still have references to have already been finalized.
Dispatching both the Dispose and finalizer calls to the same method allows you to share your shutdown code, while the boolean parameter allows you to skip the managed cleanup if you have any.
Also, the virtual-ness of the method provides an easy way for inheritors to add their own cleanup code, with less of a risk of inadvertently not calling yours.
If a class implements IDisposable.Dispose() and a derived class needs to add additional logic, that class must expose some kind of Dispose method that the derived class can chain to. Since some classes may implement IDisposable.Dispose() without having a public Dispose() method, it's useful to have a virtual method which will be protected in all implementations of IDisposable regardless of whether they have a public Dispose method or not. In most cases, the bool argument isn't really meaningful but should be thought of as a dummy argument to make the protected virtual Dispose(bool) have a different signature from the may-or-may-not-be-public Dispose().
Classes which don't use a protected virtual Dispose(bool) will require derived classes to handle their cleanup logic in a fashion which differs from the convention. Some languages, like C++/CLI, which are only equipped to extend IDisposable implementations that follow that convention, may be unable to derive classes from non-standard implementations.
