Good samples of using Finalizers in C#

When I read a few articles about memory management in C#, I was confused by finalizer methods.
There are so many complicated rules related to them.
For instance, nobody knows when finalizers will be called, they are called even if the constructor throws, the CLR doesn't guarantee that all finalizers are called when a program shuts down, etc.
What can finalizers be used for in real life?
The only example I found was a program that beeps when the GC starts.
Do you use finalizers in your code, and do you have some good samples?
UPD:
Finalizers can be used when developers want to make sure that a class is always disposed correctly via IDisposable. (link; thanks Steve Townsend)
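One concrete form of that idea is a debug-only finalizer that flags instances whose Dispose was never called. A minimal sketch, with MyResource as an illustrative name:

using System;
using System.Diagnostics;

public sealed class MyResource : IDisposable
{
    public void Dispose()
    {
        // ... release the underlying resources here ...
        GC.SuppressFinalize(this); // disposed correctly, so no finalizer run is needed
    }

#if DEBUG
    // Reaching this finalizer means nobody called Dispose.
    ~MyResource()
    {
        Debug.Fail("MyResource was not disposed");
    }
#endif
}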

There is an exhaustive discussion of Finalizer usage, with examples, here. Link courtesy of SLaks in a related answer.
See also here for a more concise summary of when you need one (which is "not very often").
There's a nice prior answer here with another good real-world example.
To summarize with a pertinent extract:
Finalizers are needed to guarantee the release of scarce resources back into the operating system, like file handles, sockets, kernel objects, etc.
For more real-world examples, browse the affected classes in .NET:
https://learn.microsoft.com/en-us/search/?terms=.Finalize&scope=.NET
One valid reason I can think of for using a finalizer is when you wrap a third-party native API in a managed wrapper, and the underlying native library requires the timely release of the operating system resources it uses.
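A minimal sketch of such a wrapper, assuming a hypothetical native library ("nativelib" and its entry points are stand-ins, not a real API):

using System;
using System.Runtime.InteropServices;

public sealed class NativeResource : IDisposable
{
    [DllImport("nativelib")] private static extern IntPtr OpenResource();
    [DllImport("nativelib")] private static extern void CloseResource(IntPtr handle);

    private IntPtr _handle = OpenResource();

    public void Dispose()
    {
        ReleaseHandle();
        GC.SuppressFinalize(this); // already cleaned up; skip finalization
    }

    // Safety net for callers who forget to Dispose.
    ~NativeResource() => ReleaseHandle();

    private void ReleaseHandle()
    {
        if (_handle != IntPtr.Zero)
        {
            CloseResource(_handle);
            _handle = IntPtr.Zero;
        }
    }
}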

The best practice I know of is plain and simple: don't use them. There might, however, be some corner cases where you want a finalizer, particularly when dealing with unmanaged objects in situations where you can't implement the Dispose pattern (legacy issues, perhaps); then you can implement a Finalize method with caution (it can reduce the performance of your system, make your objects "undead", and cause other weird scenarios; mind the exceptions, as they are uncatchable).
In 99% of cases, just implement the Dispose pattern, use it to clean up after yourself, and everything will be fine.
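For reference, a sketch of the standard Dispose pattern that pairs a finalizer with Dispose (the finalizer is only warranted if the class directly owns unmanaged resources):

using System;

public class ResourceHolder : IDisposable
{
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // no need to finalize once disposed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // dispose managed state (other IDisposable members) here
        }
        // free unmanaged resources here
        _disposed = true;
    }

    ~ResourceHolder()
    {
        Dispose(false); // safety net: managed members may already be finalized, so skip them
    }
}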

Related

Can anyone give a real-world example of how destructors are used?

While learning about destructors, a question occurred to me: when do we need to use destructors? At the same time, I was wondering how they can be used in real life.
So can anyone give a real-world example of how destructors are used?
There is a very good article by Eric Lippert about finalizers/destructors, and it includes some tests. One of the tests says:
When finalizers are being run because a process is being shut down, the runtime sets a limit on how much time the finalizer thread gets to spend making a good-faith effort to run all the finalizers. If that limit is exceeded then the runtime simply stops running more finalizers and shuts down the program.
So it can be concluded that writing finalizers can be avoided in almost all cases.
Read more about finalizers from Jon Skeet:
You should almost never use them. Basically you should only need them if you have a direct handle on an unmanaged resource, and not only is that incredibly rare, but using SafeHandle as a tiny level of indirection is a better idea anyway (which handles clean-up for you).
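A minimal sketch of the SafeHandle indirection Skeet describes; CloseMyHandle and "mynativelib" are hypothetical stand-ins for the real native release function:

using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

internal sealed class MyNativeHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    [DllImport("mynativelib")]
    private static extern bool CloseMyHandle(IntPtr handle);

    public MyNativeHandle() : base(ownsHandle: true) { }

    // The runtime calls this via (critical) finalization if the handle is
    // never disposed, so the class that owns the handle needs no finalizer.
    protected override bool ReleaseHandle() => CloseMyHandle(handle);
}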

Should I run Dispose before application exit?

Should I run Dispose before application exit?
For example, I create many objects, and some of them subscribe to events:
var myObject = new MyClass();
myObject.OnEvent += OnEventHandle;
Also, at my work I have to use classes that implement the IDisposable interface.
Then, when I decide to close the app, I do this:
Environment.Exit(-1);
Am I right?
Should I call Dispose on all objects which implement the IDisposable interface?
Can a memory leak occur?
P.S. This is server-side app, using WCF, MQ.
In this specific case, you may choose not to Dispose. I was sure I recollected a Raymond Chen analogy about not emptying the bins just before you have a building demolished.1
Your entire process is about to disappear. There's no need for you to do any cleanup of internal resources, since the OS is about to reclaim all of its resources.
However, you have to weigh this against a) it appearing non-standard, and b) potentially triggering warnings from e.g. StyleCop, versus the expected reward of taking slightly less time to exit. Do you really need to optimize this part of your application?
As others have commented, I'd usually choose to still wrap my disposable objects in usings, even though it may be strictly unnecessary in this case.
1This is the one about not doing anything in DLL_PROCESS_DETACH. The reasoning is similar.
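For illustration, a minimal sketch of the trade-off (HttpClient stands in for any disposable):

using (var client = new System.Net.Http.HttpClient())
{
    // ... do work ...
}   // Dispose runs here deterministically, even if an exception was thrown

// By contrast, Environment.Exit(-1) terminates the process without unwinding
// the stack, so pending using/finally blocks never run; the OS reclaims the
// process's resources instead.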

Just how 'disposable' is ReaderWriterLockSlim?

On principle I follow the IDisposable pattern, and for most classes that is justified. But ReaderWriterLockSlim made me question the viability of applying the pattern everywhere. All ReaderWriterLockSlim.Dispose does is close some event handles. So how important is it to Dispose a class with so few resources? In this case, I really wouldn't mind if the GC had to wait another round for the finalizers of the unmanaged resources to finish.
The consequence of applying the IDisposable pattern is considerable, however: every class that uses a disposable class now has to implement IDisposable too. In my particular case, I am implementing a wrapper for HashSet. I don't particularly expect a requirement to dispose such an object just because, incidentally, it uses a synchronizer that does.
Are there any reasons not to violate the disposable pattern in this case? While I am eager to, I wouldn't do so in practice, because violating consistency is much worse.
The problem with unmanaged OS handles is that handles come from a limited supply. The GC is not aware of this.
The pure memory consumption of a handle is not that big. Nothing more than an object in kernel memory and probably hash table entry somewhere.
You are right in that it is not enough to say: "You must always dispose all disposable objects". That rule is too simple. For example the Task class does not need to be disposed. If you know what you are doing you can take a looser stance regarding disposal. Be aware that not all team members might understand this point (now you can leave a link to this answer in the source code...).
If you are sure that you will not leak a lot of handles you can safely do this. Be aware that under edge conditions (load, bugs, ...) you might leak more than you anticipated, causing production issues.
If this field is static you don't need to dispose of it; it will (rightly) have the same lifetime as your application. I see it's not, so let's move on.
The correct way to handle an IDisposable is to dispose of it. I think we need a good reason not to do this.
Use another lock:
I think the best thing to do is to use Monitor or another lock, which will have the bonus of simplifying your code as well. ConcurrentDictionary and other framework classes seem to take this approach.
You are worried about lock convoys, but I'm not sure this is even solved by ReaderWriterLockSlim; the only real solution is to hold fewer locks and hold them for less time.
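A minimal sketch of that suggestion: guard the wrapped HashSet with an ordinary lock so the wrapper owns nothing disposable (names here are illustrative):

using System.Collections.Generic;

public class SynchronizedSet<T>
{
    private readonly HashSet<T> _set = new HashSet<T>();
    private readonly object _gate = new object();

    public bool Add(T item)
    {
        lock (_gate) { return _set.Add(item); }
    }

    public bool Contains(T item)
    {
        lock (_gate) { return _set.Contains(item); }
    }
}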
Don't dispose:
This needs a justification. Can you demonstrate needed performance benefits here?
If you have a few of these objects that are long-lived, fine; not all disposables are equally weighty (it's not like you're leaving a Word document open), and you will probably get away with it. As has been pointed out, what is the point of disposing of all this milliseconds before the application closes anyway? I believe the finalizer of an IDisposable is meant to handle situations where the object is not disposed, although you can't be sure when, or even if, it is called.
If you have a long-lived application with lots of short-lived usages of this class, however, you may run into trouble. You are baking in your assumptions about the use of your code; just be aware.

Limitations of dynamic objects in C#/Java

I'm basically a C++ guy trying to venture into C#. From the basic tutorials of C#, I happened to find that all objects are created and stored dynamically (also true for Java) and are accessed by references, and hence there's no need for copy constructors. There is also no need for bitwise copies when passing objects to a function or returning objects from a function. This makes C# much simpler than C++.
However, I read somewhere that operating on objects exclusively through references imposes limitations on the kinds of operations one can perform, denying the programmer complete control. One limitation is that the programmer cannot specify precisely when an object will be destroyed.
Can someone please elaborate on other limitations? (with a sample code if required)
Most of the "limitations" are by design rather than considered a deficiency (you may not agree of course)
You cannot determine, and don't have to worry about:
when an object is destroyed
where the object is in memory
how big it is (unless you are tuning the application)
using pointer arithmetic
accessing outside an object's bounds
accessing an object with the wrong type
whether the object is on the stack or the heap (the stack is being used more and more in Java)
fragmentation of memory (this is not true of all collectors)
Also, sharing objects between threads is simpler.
Because of the garbage collection done in Java, we cannot predict when an object will be destroyed, but the GC performs the work of a destructor.
If you want to free up some resources deterministically, you can use a finally block:
try {
    // use resources
} finally {
    // dispose resources
}
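On the C# side, the using statement compiles down to the same try/finally shape; a minimal sketch (the file name is illustrative):

using (var reader = new System.IO.StreamReader("data.txt"))
{
    // use the resource
}
// ...which the compiler expands to roughly:
// var reader = new System.IO.StreamReader("data.txt");
// try { /* use the resource */ }
// finally { if (reader != null) reader.Dispose(); }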
Having made a similar transition, I find that the more you look into it, the more you have to think about C#'s GC behaviour in all but the most straightforward cases. This is especially true when trying to handle unmanaged resources from managed code.
This article highlights a lot of the issues you may be interested in.
Personally I miss a reference counted alternative to IDisposable (more like shared_ptr), but that's probably a hangover from a C++ background.
The more I have to write my own plumbing to support C++-like programming, the more likely it is that there is another C# mechanism I've overlooked, or that I end up getting frustrated with C#. For example, swap and move are not common idioms in C# as far as I've seen, and I miss them; other programmers with a C# background may well disagree about how useful those idioms are.

Why is it always necessary to implement IDisposable on an object that has an IDisposable member?

From what I can tell, it is an accepted rule that if you have a class A that has a member m that is IDisposable, A should implement IDisposable and it should call m.Dispose() inside of it.
I can't find a satisfying reason why this is the case.
I understand the rule that if you have unmanaged resources, you should provide a finalizer along with IDisposable so that if the user doesn't explicitly call Dispose, the finalizer will still clean up during GC.
However, with that rule in place, it seems like you shouldn't need to have the rule that this question is about. For instance...
If I have a class:
class MyImage {
    private Image _img;
    ...
}
Convention states that I should have MyImage : IDisposable. But if Image has followed convention and implemented a finalizer, and I don't care about the timely release of resources, what's the point?
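For illustration, the conventional version would look roughly like this (a sketch, assuming Image is System.Drawing.Image):

using System;
using System.Drawing;

class MyImage : IDisposable
{
    private Image _img;

    public void Dispose()
    {
        if (_img != null)
            _img.Dispose(); // forward disposal to the owned member
    }
}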
UPDATE
Found a good discussion on what I was trying to get at here.
But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
You've missed the point of Dispose entirely. It's not about your convenience. It's about the convenience of other components that might want to use those unmanaged resources. Unless you can guarantee that no other code in the system cares about the timely release of resources, and the user doesn't care about timely release of resources, you should release your resources as soon as possible. That's the polite thing to do.
In the classic Prisoner's Dilemma, a lone defector in a world of cooperators gains a huge benefit. But in your case, being a lone defector produces only the tiny benefit of you personally saving a few minutes by writing low-quality, best-practice-ignoring code. It's your users and all the programs they use that suffer, and you gain practically nothing. Your code takes advantage of the fact that other programs unlock files and release mutexes and all that stuff. Be a good citizen and do the same for them. It's not hard to do, and it makes the whole software ecosystem better.
UPDATE: Here is an example of a real-world situation that my team is dealing with right now.
We have a test utility. It has a "handle leak" in that a bunch of unmanaged resources aren't aggressively disposed; it's leaking maybe half a dozen handles per "task". It maintains a list of "tasks to do" when it discovers disabled tests, and so on. We have ten or twenty thousand tasks in this list, so we very quickly end up with so many outstanding handles -- handles that should be dead and released back into the operating system -- that soon none of the code in the system that is not related to testing can run. The test code doesn't care. It works just fine. But eventually the code being tested can't make message boxes or other UI and the entire system either hangs or crashes.
The garbage collector has no reason to know that it needs to run finalizers more aggressively to release those handles sooner; why should it? Its job is to manage memory. Your job is to manage handles, so you've got to do that job.
But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
Then there isn't one, if you don't care about timely release and you can ensure that the disposable object is written correctly (in truth, I never make an assumption like that, not even with Microsoft's code; you never know when something accidentally slipped by). The point is that you should care, as you never know when it will cause a problem. Think about an open database connection: leaving it hanging around means it isn't returned to the pool, and you can run out if several requests come in for one.
Nothing says you have to do it if you don't care. Think of it this way: it's like releasing variables in an unmanaged program. You don't have to, but it is highly advisable. If for no other reason, the person inheriting the program doesn't have to wonder why it wasn't taken care of and then try to clean it up.
Firstly, there's no guarantee of when an object will be cleaned up by the finalizer thread. Think about the case where a class has a reference to a SQL connection: unless you make sure it is disposed of promptly, you'll have a connection open for an unknown period of time, and you won't be able to reuse it.
Secondly, finalization is not a cheap process; you should make sure that when your objects are disposed of properly, you call GC.SuppressFinalize(this) to prevent finalization from happening.
Expanding on the "not cheap" aspect, the finalizer thread is a high-priority thread. It will take resources away from your main application if you give it too much to do.
Edit: OK, here's a blog article by Chris Brumme about finalization, including why it is expensive. (I knew I'd read loads about this somewhere.)
If you don't care about the timely release of resources, then indeed there is no point. If you can be sure that the code is only for your consumption and you've got plenty of free memory/resources why not let GC hoover it up when it chooses to. OTOH, if someone else is using your code and creating many instances of (e.g.) MyImage, it's going to be pretty difficult to control memory/resource usage unless it disposes nicely.
Many classes require that Dispose be called to ensure correctness. If some C# code uses an iterator with a finally block, for example, the code in that block will not run if an enumerator is created from that iterator and not disposed. While there are a few cases where it would be impractical to ensure objects were cleaned up without finalizers, for the most part code which relies upon finalizers for correct operation or to avoid memory leaks is bad code.
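A minimal sketch of that iterator case (names are illustrative):

using System;
using System.Collections.Generic;

static class IteratorDemo
{
    static IEnumerable<int> Numbers()
    {
        try
        {
            yield return 1;
            yield return 2;
        }
        finally
        {
            Console.WriteLine("cleanup ran"); // runs only on Dispose or complete iteration
        }
    }

    static void Main()
    {
        IEnumerator<int> e = Numbers().GetEnumerator();
        e.MoveNext(); // stop partway through the sequence
        e.Dispose();  // prints "cleanup ran"; skip this call and the finally never executes
    }
}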
If your code acquires ownership of an IDisposable object, then unless either the object's class is sealed or your code creates the object by calling a constructor (as opposed to a factory method), you have no way of knowing the real type of the object, or whether it can be safely abandoned. Microsoft may have originally intended that it should be safe to abandon any type of object, but that is unrealistic, and the belief that it should be safe to abandon any type of object is unhelpful. If an object subscribes to events, allowing for safe abandonment requires either adding a level of weak indirection to all events, or a level of (non-weak) indirection to all other accesses. In many cases, it's better to require that a caller Dispose an object correctly than to add significant overhead and complexity to allow for abandonment.
Note also, btw, that even when objects try to accommodate abandonment it can still be very expensive. Create a Microsoft.VisualBasic.Collection (or whatever it's called), add a few objects, and create and Dispose a million enumerators. No problem; it executes very quickly. Now create and abandon a million enumerators: a major snooze fest unless you force a GC every few thousand enumerators. The Collection object is written to allow for abandonment, but that doesn't mean it doesn't have a major cost.
If an object you're using implements IDisposable, it's telling you it has something important to do when you're finished with it. That important thing may be to release unmanaged resources, or unhook from events so that it doesn't handle events after you think you're done with it, etc, etc. By not calling the Dispose, you're saying that you know better about how that object operates than the original author. In some tiny edge cases, this may actually be true, if you authored the IDisposable class yourself, or you know of a bug or performance problem related to calling Dispose. In general, it's very unlikely that ignoring a class requesting you to dispose it when you're done is a good idea.
Talking about finalizers: as has been pointed out, they have a cost, which can be avoided by disposing the object (if it uses SuppressFinalize). Not just the cost of running the finalizer itself, and not just the cost of having to wait until that finalizer is done before the GC can collect the object. An object with a finalizer survives the collection in which it is identified as unused and needing finalization, so it will be promoted (if it's not already in gen 2). This has several knock-on effects:
The next higher generation will be collected less frequently, so after the finalizer runs, you may be waiting a long time before the GC comes around to that generation and sweeps your object away. So it can take a lot longer to free memory.
This adds unnecessary pressure to the collection the object is promoted to. If it's promoted from gen 0 to gen 1, then now gen 1 will fill up earlier than it needs to.
This can lead to more frequent garbage collections at higher generations, which is another performance hit.
If the object's finalizer isn't completed by the time the GC comes around to the higher generation, the object can be promoted again. Hence in a bad case you can cause an object to be promoted from gen 0 to gen 2 without good reason.
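A small sketch that makes the promotion visible; behaviour varies by runtime and build configuration (a Debug JIT may extend object lifetimes), so treat it as illustrative:

using System;

class Finalizable
{
    ~Finalizable() { }
}

static class PromotionDemo
{
    static void Main()
    {
        var weak = new WeakReference(new Finalizable(), trackResurrection: true);

        GC.Collect(); // unreachable, but the pending finalizer keeps the object alive
        Console.WriteLine(weak.IsAlive); // likely True: promoted, queued for the finalizer thread

        GC.WaitForPendingFinalizers();
        GC.Collect(); // only now can the memory actually be reclaimed
        Console.WriteLine(weak.IsAlive); // likely False
    }
}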
Obviously if you're only doing this on one object it's not likely to cost you anything noticeable. If you're doing it as general practice because you find calling Dispose on objects you're using tiresome, then it can lead to all of the problems above.
Dispose is like a lock on a front door. It's probably there for a reason, and if you're leaving the building, you should probably lock the door. If it wasn't a good idea to lock it, there wouldn't be a lock.
Even if you don't care in this particular case, you should still follow the standard because you will care in some cases. It's much easier to set a standard and follow it always based on specific guidelines than have a standard that you sometimes disregard. This is especially true as your team grows and your product ages.
