I follow the IDisposable pattern as a matter of principle, and for most classes that is justified. But ReaderWriterLockSlim made me question the viability of applying the pattern everywhere. All ReaderWriterLockSlim.Dispose does is close a few event handles. So how important is it to dispose a class that holds so few resources? In this case, I really wouldn't mind if the GC had to wait another round for the finalizers of the unmanaged resources to finish.
The consequence of applying the IDisposable pattern is considerable, however: every class that uses a disposable class now has to implement IDisposable too. In my particular case, I am implementing a wrapper for HashSet. I don't really expect callers to be required to dispose such an object just because, incidentally, it uses a synchronizer that does.
Are there any good reasons not to deviate from the disposable pattern in this case? While I am eager to, I wouldn't do so in practice, because breaking consistency is much worse.
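To make the situation concrete, the wrapper looks roughly like this (a sketch; the class and member names are made up for illustration):

using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical wrapper illustrating how IDisposable "cascades":
// because it owns a ReaderWriterLockSlim, the conventional pattern
// forces the wrapper itself to become IDisposable.
public sealed class SynchronizedSet<T> : IDisposable
{
    private readonly HashSet<T> _set = new HashSet<T>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public bool Contains(T item)
    {
        _lock.EnterReadLock();
        try { return _set.Contains(item); }
        finally { _lock.ExitReadLock(); }
    }

    public bool Add(T item)
    {
        _lock.EnterWriteLock();
        try { return _set.Add(item); }
        finally { _lock.ExitWriteLock(); }
    }

    // Only exists because of the lock; every caller now inherits the burden.
    public void Dispose()
    {
        _lock.Dispose();
    }
}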
The problem with unmanaged OS handles is that handles come from a limited supply. The GC is not aware of this.
The pure memory consumption of a handle is not that big. Nothing more than an object in kernel memory and probably a hash table entry somewhere.
You are right in that it is not enough to say: "You must always dispose all disposable objects". That rule is too simple. For example the Task class does not need to be disposed. If you know what you are doing you can take a looser stance regarding disposal. Be aware that not all team members might understand this point (now you can leave a link to this answer in the source code...).
If you are sure that you will not leak a lot of handles you can safely do this. Be aware that under edge conditions (load, bugs, ...) you might leak more than you anticipated, causing production issues.
If this field is static you don't need to dispose of it; it will (rightly) have the same lifetime as your application. I see it's not, so let's move on.
The correct way to handle an IDisposable is to dispose of it. I think we need a good reason not to do this.
Use another lock:
I think the best thing to do is to use Monitor or another lock, which will have the bonus of simplifying your code as well. ConcurrentDictionary and other framework classes seem to take this approach.
You are worried about lock convoys, but I'm not sure this is even solved by ReaderWriterLockSlim; the only real solution is to hold fewer locks and hold them for less time.
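A sketch of that alternative, reusing the hypothetical wrapper from the question: with Monitor (the lock statement) there is simply nothing left to dispose.

using System.Collections.Generic;

// Same hypothetical wrapper using Monitor via the lock statement:
// no ReaderWriterLockSlim, hence no IDisposable to propagate.
public sealed class SynchronizedSet<T>
{
    private readonly HashSet<T> _set = new HashSet<T>();
    private readonly object _sync = new object();

    public bool Contains(T item)
    {
        lock (_sync) { return _set.Contains(item); }
    }

    public bool Add(T item)
    {
        lock (_sync) { return _set.Add(item); }
    }
}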
Don't dispose:
This needs a justification. Can you demonstrate needed performance benefits here?
If you have a few of these objects that are long lived, fine; not all disposables are equally weighty (it's not like you're leaving a Word document open), and you will probably get away with it. As has been pointed out, what is the point of disposing of all this milliseconds before the application closes anyway? I believe the destructor of an IDisposable is meant to handle situations where the object is not disposed, although you can't be sure when or even if this is called.
If you have a long-lived application with lots of short-lived usages of this class, however, you may run into trouble. You are baking your assumptions about how your code will be used into it; just be aware of that.
Related
Should I run Dispose before application exit?
For example, I create many objects, and some of them have event subscriptions:
var myObject = new MyClass();
myObject.OnEvent += OnEventHandle;
And, for example, at my work I have to use classes that implement the IDisposable interface.
Then I decide to close the app and do this:
Environment.Exit(-1);
Am I right?
Should I call Dispose on all objects which implement the IDisposable interface?
Can a memory leak occur?
P.S. This is server-side app, using WCF, MQ.
In this specific case, you may choose not to Dispose. I was sure I recollected a Raymond Chen analogy about not emptying the bins just before you have a building demolished.1
Your entire process is about to disappear. There's no need for you to do any cleanup of internal resources, since the OS is about to reclaim all of its resources.
However, you have to weigh this against (a) appearing non-standard and (b) potentially triggering warnings from e.g. StyleCop, versus the expected reward of taking slightly less time to exit. Do you really need to optimize this part of your application?
As others have commented, I'd usually choose to still wrap my disposable objects in usings, even though it may be strictly unnecessary in this case.
1This is the one about not doing anything in DLL_PROCESS_DETACH. The reasoning is similar.
I came across the following quote: "Destructors are not guaranteed to be called." and this scares me a bit.
It goes on to say that even a try finally block can be interrupted, causing memory leaks.
It does give a solution: either place your code in a CER (constrained execution region) or derive from CriticalFinalizerObject.
My questions are:
What are the trade-offs of using CriticalFinalizerObject, if any?
Are there any cases where you found deriving from CriticalFinalizerObject really useful?
Should I only worry about using this when I start running into memory leaks?
Why are you relying on destructors? Most of the time you don't have any need for them.
Perhaps have a look at IDisposable and the Dispose Pattern.
Here are some links that helped me to understand this subject:
-> Everybody thinks about garbage collection the wrong way
-> How To implement dispose Pattern
-> Implementing Finalize and Dispose to Clean Up Unmanaged Resources
Regarding question #3: memory leaks would typically be caused by:
Unmanaged resources not being freed. For those, using IDisposable (with a fallback call to Dispose() in the finalizer) is the best approach.
References to managed objects that are maintained because other objects still point to them, even though they are supposed to be removed. IMHO, that's more a problem of code quality than a low-level issue with garbage collection.
Unless you run into actual memory leaks, you should not even worry about it, and not try to force any behavior.
I would suggest using the IDisposable interface for all resources that need to be destroyed, and use them in a using block.
Typically the differences between normal finalizers and critical finalizers only become important on AppDomain unload. Since most unmanaged resources automatically go away when the process exits, you usually only need to worry about critical finalization if you want to unload AppDomains cleanly while keeping the process running.
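For completeness, opting into critical finalization is just a matter of deriving from CriticalFinalizerObject; a minimal sketch:

using System.Runtime.ConstrainedExecution;

// Deriving from CriticalFinalizerObject tells the CLR to treat the finalizer
// as a critical finalizer, so it is also run on (rude) AppDomain unload.
public class CriticalHandleHolder : CriticalFinalizerObject
{
    ~CriticalHandleHolder()
    {
        // release the unmanaged resource here (code is subject to CER restrictions)
    }
}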
From what I can tell, it is an accepted rule that if you have a class A that has a member m that is IDisposable, A should implement IDisposable and it should call m.Dispose() inside of it.
I can't find a satisfying reason why this is the case.
I understand the rule that if you have unmanaged resources, you should provide a finalizer along with IDisposable so that if the user doesn't explicitly call Dispose, the finalizer will still clean up during GC.
However, with that rule in place, it seems like you shouldn't need to have the rule that this question is about. For instance...
If I have a class:
class MyImage {
    private Image _img;
    ...
}
Convention states that I should have MyImage : IDisposable. But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
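For concreteness, the convention would have me write something like this (assuming Image is System.Drawing.Image), which does nothing beyond forwarding the call:

using System;
using System.Drawing;

class MyImage : IDisposable
{
    private Image _img;

    public void Dispose()
    {
        // Forward disposal to the owned member; MyImage itself holds
        // no unmanaged resources, so it does not need a finalizer.
        if (_img != null)
        {
            _img.Dispose();
            _img = null;
        }
    }
}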
UPDATE
Found a good discussion on what I was trying to get at here.
But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
You've missed the point of Dispose entirely. It's not about your convenience. It's about the convenience of other components that might want to use those unmanaged resources. Unless you can guarantee that no other code in the system cares about the timely release of resources, and the user doesn't care about timely release of resources, you should release your resources as soon as possible. That's the polite thing to do.
In the classic Prisoner's Dilemma, a lone defector in a world of cooperators gains a huge benefit. But in your case, being a lone defector produces only the tiny benefit of you personally saving a few minutes by writing low-quality, best-practice-ignoring code. It's your users and all the programs they use that suffer, and you gain practically nothing. Your code takes advantage of the fact that other programs unlock files and release mutexes and all that stuff. Be a good citizen and do the same for them. It's not hard to do, and it makes the whole software ecosystem better.
UPDATE: Here is an example of a real-world situation that my team is dealing with right now.
We have a test utility. It has a "handle leak" in that a bunch of unmanaged resources aren't aggressively disposed; it's leaking maybe half a dozen handles per "task". It maintains a list of "tasks to do" when it discovers disabled tests, and so on. We have ten or twenty thousand tasks in this list, so we very quickly end up with so many outstanding handles -- handles that should be dead and released back into the operating system -- that soon none of the code in the system that is not related to testing can run. The test code doesn't care. It works just fine. But eventually the code being tested can't make message boxes or other UI and the entire system either hangs or crashes.
The garbage collector has no reason to know that it needs to run finalizers more aggressively to release those handles sooner; why should it? Its job is to manage memory. Your job is to manage handles, so you've got to do that job.
But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
Then there isn't one, if you don't care about timely release, and you can ensure that the disposable object is written correctly (in truth I never make an assumption like that, not even with MS's code; you never know when something accidentally slipped by). The point is that you should care, as you never know when it will cause a problem. Think about an open database connection: leaving it hanging around means it isn't returned to the pool, and you can run out of connections if several requests come in at once.
Nothing says you have to do it if you don't care. Think of it this way: it's like releasing memory in an unmanaged program. You don't have to, but it is highly advisable. If for no other reason, the person inheriting the program doesn't have to wonder why it wasn't taken care of and then try to clean it up.
Firstly, there's no guaranteeing when an object will be cleaned up by the finalizer thread - think about the case where a class has a reference to a sql connection. Unless you make sure this is disposed of promptly, you'll have a connection open for an unknown period of time - and you won't be able to reuse it.
Secondly, finalization is not a cheap process - you should be making sure that if your objects are disposed of properly you're calling GC.SuppressFinalize(this) to prevent finalization happening.
Expanding on the "not cheap" aspect, the finalizer thread is a high-priority thread. It will take resources away from your main application if you give it too much to do.
Edit: OK, here's a blog article by Chris Brumme about finalization, including why it is expensive. (I knew I'd read loads about this somewhere.)
If you don't care about the timely release of resources, then indeed there is no point. If you can be sure that the code is only for your consumption and you've got plenty of free memory/resources why not let GC hoover it up when it chooses to. OTOH, if someone else is using your code and creating many instances of (e.g.) MyImage, it's going to be pretty difficult to control memory/resource usage unless it disposes nicely.
Many classes require that Dispose be called to ensure correctness. If some C# code uses an iterator with a "finally" block, for example, the code in that block will not run if an enumerator is created with that iterator and not disposed. While there are a few cases where it would be impractical to ensure objects were cleaned up without finalizers, for the most part code which relies upon finalizers for correct operation or to avoid memory leaks is bad code.
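To illustrate the iterator point, a minimal sketch (names are illustrative): the finally block only runs if the enumerator is disposed or iterated to completion; abandon the enumerator and it never runs, because the compiler-generated enumerator has no finalizer.

using System;
using System.Collections.Generic;

static class IteratorDemo
{
    static IEnumerable<int> Numbers()
    {
        try
        {
            yield return 1;
            yield return 2;
        }
        finally
        {
            Console.WriteLine("cleanup");   // reached only via Dispose or completion
        }
    }

    static void Main()
    {
        IEnumerator<int> e = Numbers().GetEnumerator();
        e.MoveNext();   // now paused inside the try block
        e.Dispose();    // without this call, "cleanup" would never print
    }
}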
If your code acquires ownership of an IDisposable object, then unless either the object's class is sealed or your code creates the object by calling a constructor (as opposed to a factory method) you have no way of knowing what the real type of the object is, and whether it can be safely abandoned. Microsoft may have originally intended that it should be safe to abandon any type of object, but that is unrealistic, and the belief that it should be safe to abandon any type of object is unhelpful. If an object subscribes to events, allowing for safe abandonment will require either adding a level of weak indirection to all events, or a level of (non-weak) indirection to all other accesses. In many cases, it's better to require that a caller Dispose an object correctly than to add significant overhead and complexity to allow for abandonment.
Note also, btw, that even when objects try to accommodate abandonment it can still be very expensive. Create a Microsoft.VisualBasic.Collection (or whatever it's called), add a few objects, and create and Dispose a million enumerators. No problem--executes very quickly. Now create and abandon a million enumerators. Major snooze fest unless you force a GC every few thousand enumerators. The Collection object is written to allow for abandonment, but that doesn't mean it doesn't have a major cost.
If an object you're using implements IDisposable, it's telling you it has something important to do when you're finished with it. That important thing may be to release unmanaged resources, or unhook from events so that it doesn't handle events after you think you're done with it, etc, etc. By not calling the Dispose, you're saying that you know better about how that object operates than the original author. In some tiny edge cases, this may actually be true, if you authored the IDisposable class yourself, or you know of a bug or performance problem related to calling Dispose. In general, it's very unlikely that ignoring a class requesting you to dispose it when you're done is a good idea.
Talking about finalizers - as has been pointed out, they have a cost, which can be avoided by Disposing the object (if it uses SuppressFinalize). Not just the cost of running the finalizer itself, and not just the cost of having to wait till that finalizer is done before the GC can collect the object. An object with a finalizer survives the collection in which it is identified as being unused and needing finalization. So it will be promoted (if it's not already in gen 2). This has several knock on effects:
The next higher generation will be collected less frequently, so after the finalizer runs, you may be waiting a long time before the GC comes around to that generation and sweeps your object away. So it can take a lot longer to free memory.
This adds unnecessary pressure to the collection the object is promoted to. If it's promoted from gen 0 to gen 1, then now gen 1 will fill up earlier than it needs to.
This can lead to more frequent garbage collections at higher generations, which is another performance hit.
If the object's finalizer isn't completed by the time the GC comes around to the higher generation, the object can be promoted again. Hence in a bad case you can cause an object to be promoted from gen 0 to gen 2 without good reason.
Obviously if you're only doing this on one object it's not likely to cost you anything noticeable. If you're doing it as general practice because you find calling Dispose on objects you're using tiresome, then it can lead to all of the problems above.
Dispose is like a lock on a front door. It's probably there for a reason, and if you're leaving the building, you should probably lock the door. If it wasn't a good idea to lock it, there wouldn't be a lock.
Even if you don't care in this particular case, you should still follow the standard because you will care in some cases. It's much easier to set a standard and follow it always based on specific guidelines than have a standard that you sometimes disregard. This is especially true as your team grows and your product ages.
I'd like to know when I should and shouldn't be wrapping things in a USING block.
From what I understand, the compiler translates it into a try/finally, where the finally calls Dispose() on the object.
I always use a USING around database connections and file access, but it's more out of habit than a 100% understanding. I know you should explicitly (or with a using) Dispose() objects which control resources, to ensure they are released instantly rather than whenever the CLR feels like it, but that's where my understanding breaks down.
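In other words, as far as I understand it, the first form below becomes roughly the second (the file name is just for illustration, and the real compiler output also casts to IDisposable):

using System;
using System.IO;

class UsingExpansionDemo
{
    static void Main()
    {
        // What I write:
        using (var reader = new StreamReader("data.txt"))
        {
            Console.WriteLine(reader.ReadLine());
        }

        // Roughly what the compiler generates:
        StreamReader reader2 = new StreamReader("data.txt");
        try
        {
            Console.WriteLine(reader2.ReadLine());
        }
        finally
        {
            if (reader2 != null)
                ((IDisposable)reader2).Dispose();
        }
    }
}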
Are IDisposables not disposed of when they go out of scope?
Do I only need to use a USING when my object makes use of Dispose to tidy itself up?
Thanks
Edit: I know there are a couple of other posts on the USING keyword, but I'm more interested in answers relating to the CLR and exactly what's going on internally.
Andrew
No, IDisposable items are not disposed when they go out of scope. It is for precisely this reason that we need IDisposable - for deterministic cleanup.
They will eventually get garbage collected, and if there is a finalizer it will (maybe) be called - but that could be a long time in the future (not good for connection pools etc). Garbage collection is dependent on memory pressure - if nothing wants extra memory, there is no need to run a GC cycle.
Interestingly (perhaps) there are some cases where "using" is a pain - when the offending class throws an exception on Dispose() sometimes. WCF is an offender of this. I have discussed this topic (with a simple workaround) here.
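One commonly cited shape for such a workaround (not necessarily the exact one in the linked post) is to skip using for the proxy and fall back to Abort when Close throws:

using System;
using System.ServiceModel;

static class WcfCleanup
{
    // Close the proxy if possible; if Close itself faults, Abort it instead
    // so the caller doesn't get a second exception during cleanup.
    public static void CloseSafely(ICommunicationObject proxy)
    {
        try
        {
            proxy.Close();
        }
        catch (CommunicationException)
        {
            proxy.Abort();
        }
        catch (TimeoutException)
        {
            proxy.Abort();
        }
    }
}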
Basically - if the class implements IDisposable, and you own an instance (i.e. you created it or whatever), it is your job to ensure that it gets disposed. That might mean via "using", or it might mean passing it to another piece of code that assumes responsibility.
I've actually seen debug code of the type:
#if DEBUG
    ~Foo() {
        // complain loudly that somebody forgot to dispose...
    }
#endif
(where the Dispose calls GC.SuppressFinalize)
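Put together, that debug-only check looks roughly like this (a sketch):

using System;

public class Foo : IDisposable
{
    public void Dispose()
    {
        // ... real cleanup would go here ...
        GC.SuppressFinalize(this);   // disposed correctly, so silence the check
    }

#if DEBUG
    ~Foo()
    {
        // Only reached when Dispose was never called.
        Console.Error.WriteLine("Foo was not disposed!");
    }
#endif
}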
"Are IDisposables not disposed of when
they go out of scope?"
No. If the IDisposable object is finalizable, which is not the same thing, then it will be finalized when it's garbage collected.
Which might be soon or might be almost never.
Jeff Richter's C#/CLR book is very good on all this stuff, and the Framework Design Guidelines book is also useful.
Do I only need to use a USING when my object makes use of Dispose to tidy itself up?
You can only use 'using' when the object implements IDisposable. The compiler will object if you try to do otherwise.
To add to the other answers, you should use using (or an explicit Dispose) whenever an object holds any resources other than managed memory. Examples would be things like files, sockets, database connections, or even GDI drawing handles.
The garbage collector would eventually finalise these objects, but only at some unspecified time in the future. You can't rely on it happening in a timely fashion, and you might have run out of that resource in the meantime.
Is there a catch or hidden problem in using a DisposableBase base class instead of recoding the Dispose pattern on every class?
Why isn't everyone using such a relevant class?
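By DisposableBase I mean roughly this kind of class, which centralizes the boilerplate once (a sketch, not any particular library type):

using System;

public abstract class DisposableBase : IDisposable
{
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    // Derived classes override this instead of re-coding the whole pattern.
    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // release managed resources in overrides
        }
        // release unmanaged resources in overrides
        _disposed = true;
    }

    ~DisposableBase()
    {
        Dispose(false);
    }
}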
Edits:
I naturally only meant classes that implement IDisposable
I know it uses up the option for inheritance, but I'm willing to pay the price (at least when I can and it doesn't hurt me otherwise).
When I can seal the class, I do - but I have some cases where I want the base of an inheritance hierarchy to be Disposable.
You don't need to implement Dispose() on every class - just those with something that needs deterministic cleanup. Re a Disposable base-class, I'm not entirely sure it provides a whole lot - IDisposable isn't a complex interface. The main time it might be useful is if you are handling unmanaged resources and want a finalizer, but even then it isn't much code.
Personally, I wouldn't bother with such a base class. In particular, inheritance (in a single-inheritance world) gets restrictive very quickly. But more to the point, overriding a method isn't much different to simply providing a public Dispose() method.
Again: you only need a finalizer etc if you are handling unmanaged objects.
If I had a lot of these (unmanaged resources), I might see whether I could get PostSharp to do the work for me. I don't know if one already exists, but it might be possible to create an aspect that handles (in particular) the finalizer etc. Who knows...
Well, it uses up your one option for inheritance to describe a single aspect of your class - that's not ideal, IMO. It would be interesting to try to do something with composition, where you have a reference to a DisposableHelper and the implementation of IDisposable just calls helper.Dispose, which has the rest of the boilerplate logic in - and can call back to your code via a callback delegate. Hmm. Subclasses could subscribe to a protected Disposing event to register "I need to do something"... it might be worth looking at some time.
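A very rough sketch of that composition idea (all names here are invented; it's just to show the shape):

using System;

// Invented helper type: owns the "dispose exactly once" bookkeeping and
// calls back into the owning class via an event.
public sealed class DisposableHelper : IDisposable
{
    private bool _disposed;

    public event EventHandler Disposing;

    public void Dispose()
    {
        if (_disposed) return;
        _disposed = true;
        EventHandler handler = Disposing;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

// The owning class just forwards IDisposable to the helper.
public class UsesHelper : IDisposable
{
    private readonly DisposableHelper _disposable = new DisposableHelper();

    public UsesHelper()
    {
        _disposable.Disposing += (sender, args) => { /* clean up here */ };
    }

    public void Dispose()
    {
        _disposable.Dispose();
    }
}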
Personally I don't find myself implementing IDisposable often enough to make it an issue - and when I do, I typically seal my classes anyway, so half of the stuff in the pattern becomes a non-issue.
As Marc Gravell said, you only need a finalizer if you are handling unmanaged objects. Introducing an unnecessary finalizer in a base class is a bad idea, as per the reasons in section 1.1.4 of the Dispose, Finalization, and Resource Management guidelines:
There is a real cost associated with instances with finalizers, both from a performance and code complexity standpoint. ... Finalization increases the cost and duration of your object’s lifetime as each finalizable object must be placed on a special finalizer registration queue when allocated, essentially creating an extra pointer-sized field to refer to your object. Moreover, objects in this queue get walked during GC, processed, and eventually promoted to yet another queue that the GC uses to execute finalizers. Increasing the number of finalizable objects directly correlates to more objects being promoted to higher generations, and an increased amount of time spent by the GC walking queues, moving pointers around, and executing finalizers. Also, by keeping your object’s state around longer, you tend to use memory for a longer period of time, which leads to an increase in working set.
If you use SafeHandle (and related classes), it's unlikely that any classes that derive from DisposableBase would ever need to be finalized.
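For illustration, wrapping a raw handle in a SafeHandle looks roughly like this (CloseHandle is just an example of a native release function):

using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

// The SafeHandle carries its own critical finalizer, so a class that owns
// a NativeResourceHandle only needs to forward Dispose, never to finalize.
internal sealed class NativeResourceHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    public NativeResourceHandle(IntPtr preexistingHandle) : base(true)
    {
        SetHandle(preexistingHandle);
    }

    protected override bool ReleaseHandle()
    {
        return CloseHandle(handle);   // release the underlying native handle
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool CloseHandle(IntPtr hObject);
}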