Should I run Dispose before application exit? - c#

Should I run Dispose before application exit?
For example, I create many objects, and some of them have event subscriptions:
var myObject = new MyClass();
myObject.OnEvent += OnEventHandle;
And, for example, at my work I have to use classes that implement the IDisposable interface.
Then, when I decide to close the app, I do this:
Environment.Exit(-1);
Am I right?
Should I call Dispose on all objects which implement the IDisposable interface?
Can a memory leak occur?
P.S. This is a server-side app, using WCF and MQ.

In this specific case, you may choose not to Dispose. I was sure I recollected a Raymond Chen analogy about not emptying the bins just before you have a building demolished.¹
Your entire process is about to disappear. There's no need for you to do any cleanup of internal resources, since the OS is about to reclaim all of its resources.
However, you have to weigh this up against (a) it appearing non-standard and (b) potentially triggering warnings from e.g. StyleCop, versus the expected reward of taking slightly less time to exit - do you really need to optimize this part of your application?
As others have commented, I'd usually choose to still wrap my disposable objects in usings, even though it may be strictly unnecessary in this case.
¹ This is the one about not doing anything in DLL_PROCESS_DETACH. The reasoning is similar.
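For completeness, a minimal sketch of what that looks like for the question's example (MyClass and OnEventHandle are the hypothetical names from the question):

// Sketch only: the using block guarantees Dispose runs before the process exits,
// and the handler is unsubscribed so nothing outlives the scope.
using (var myObject = new MyClass())
{
    myObject.OnEvent += OnEventHandle;

    // ... do the application's work ...

    myObject.OnEvent -= OnEventHandle;
}

Environment.Exit(-1);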

Related

Just how 'disposable' is ReaderWriterLockSlim?

I follow the IDisposable pattern on principle, and for most classes that is justified. But ReaderWriterLockSlim made me question the viability of applying such a pattern. All ReaderWriterLockSlim.Dispose does is close some event handles. So how important is it to Dispose such a class with so few resources? In this case, I really wouldn't mind if the GC had to wait another round for the finalizers of the unmanaged resources to finish.
The consequence of applying the IDisposable pattern is considerable, however: every class that uses a disposable class now has to implement IDisposable too. In my particular case, I am implementing a wrapper for HashSet. I don't particularly expect callers to have to dispose such an object just because, incidentally, it uses a synchronizer which does.
Are there any reasons not to violate the disposable pattern in this case? While I am eager to, I wouldn't do so in practice, because violating consistency is much worse.
The problem with unmanaged OS handles is that handles come from a limited supply. The GC is not aware of this.
The pure memory consumption of a handle is not that big: nothing more than an object in kernel memory and probably a hash table entry somewhere.
You are right in that it is not enough to say: "You must always dispose all disposable objects". That rule is too simple. For example, the Task class does not need to be disposed. If you know what you are doing you can take a looser stance regarding disposal. Be aware that not all team members might understand this point (now you can leave a link to this answer in the source code...).
If you are sure that you will not leak a lot of handles you can safely do this. Be aware that under edge conditions (load, bugs, ...) you might leak more than you anticipated, causing production issues.
If this field is static you don't need to dispose of it; it will (rightly) have the same lifetime as your application. I see it's not, so let's move on.
The correct way to handle an IDisposable is to dispose of it. I think we need a good reason not to do this.
Use another lock:
I think the best thing to do is to use Monitor or another lock, which will have the bonus of simplifying your code as well. ConcurrentDictionary and other framework classes seem to take this approach.
You are worried about lock convoys, but I'm not sure this is even solved by ReaderWriterLockSlim; the only real solution is to hold fewer locks and to hold them for less time.
Don't dispose:
This needs a justification. Can you demonstrate needed performance benefits here?
If you have a few of these objects that are long lived, fine; not all disposables are equally weighty (it's not like you're leaving a Word document open), and you will probably get away with it. As has been pointed out, what is the point of disposing of all this milliseconds before the application closes anyway? I believe the finalizer of an IDisposable is meant to handle situations where the object is not disposed, although you can't be sure when or even if it is called.
If you have a long-lived application with lots of short-lived usages of this class, however, you may run into trouble. You are baking in your assumptions about the use of your code; just be aware.
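A minimal sketch of the "use another lock" option (the wrapper type and its members are hypothetical, loosely modelled on the question's HashSet wrapper): the plain lock statement needs no disposal, so the wrapper can stay free of IDisposable entirely.

using System.Collections.Generic;

// Hypothetical wrapper sketched with the lock statement (Monitor) instead of
// ReaderWriterLockSlim; nothing here needs to be disposed.
public class SynchronizedSet<T>
{
    private readonly HashSet<T> _items = new HashSet<T>();
    private readonly object _gate = new object();

    public bool Add(T item)
    {
        lock (_gate) { return _items.Add(item); }
    }

    public bool Contains(T item)
    {
        lock (_gate) { return _items.Contains(item); }
    }
}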

C# Singleton Log Class that Stores Messages on Disk On Destruction

I have an ASP.NET C# singleton (scoped to a session through HttpContext.Current.Session) which receives messages from code (warnings, errors, exceptions etc.). The messages are related to code problems, mainly used while debugging, not during production.
I wrote a custom destructor for this object so that its contents are written/appended to a file on disk.
I wanted to ask two things related to this situation:
a) Is it a good idea to open a file and write to it in the object's destructor? Concurrent IO access is handled through a static lock.
b) When is the destructor called for session-scoped objects? Is it only when the session expires on the server?
I also recommend using some existing logging package. If you do decide to do this yourself, here are some things to bear in mind for the future:
a) No, it's not a good idea. You shouldn't access managed resources in the finalizer (destructor), so if you have some log strings in memory, for example, it's bad practice to access them (or the list they are contained in) as they may have already been finalized themselves at this point.
I don't want to repeat the recommended pattern, so see https://stackoverflow.com/a/1943856/2586804
You'll see there is only one place you should access managed resources during Dispose, and that is when it is called by user code and not by the GC. So this should help you come to the conclusion that to achieve this, you must call .Dispose() yourself (or use a using), because when (and if) the GC does it, it cannot access the managed members that contain the log lines.
b) Don't know, but it doesn't matter, as you cannot use a finalizer for this purpose anyway.
The bottom line is you can't rely on the GC to run code for you. It's bad practice because you don't know when it's going to happen; plus any reference to the object anywhere, now or in the future, will prevent the object from being collected and introduce a bug.
You also shouldn't get C# finalizers/destructors to run code like this, because that's not what they are for; they are for freeing unmanaged resources so that the machine doesn't run out. And note, it's rare to use them in C#, because most people's day-to-day work is all with managed objects.
Instead, explicitly tell the object to write its logs; a method called Flush would be a good name. Or just let it write one line at a time. This would be the usual behaviour.
To accomplish this, implement IDisposable and then wrap instances of your logger in a using block, which will guarantee that the Dispose method is called on your logger.
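A minimal sketch of that approach (the class and member names here are hypothetical, not taken from the question's code):

using System;
using System.Collections.Generic;
using System.IO;

// Hypothetical logger: messages are written when Flush is called explicitly or
// when Dispose runs at the end of a using block - never from a finalizer.
public sealed class SessionLogger : IDisposable
{
    private static readonly object FileLock = new object(); // serializes file access
    private readonly List<string> _messages = new List<string>();
    private readonly string _path;

    public SessionLogger(string path) { _path = path; }

    public void Log(string message) { _messages.Add(message); }

    public void Flush()
    {
        if (_messages.Count == 0) return;
        lock (FileLock)
        {
            File.AppendAllLines(_path, _messages);
        }
        _messages.Clear();
    }

    public void Dispose() { Flush(); }
}

Usage would then be: using (var log = new SessionLogger("debug.log")) { log.Log("..."); } - Dispose, and hence Flush, runs when the block exits.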

Why is it always necessary to implement IDisposable on an object that has an IDisposable member?

From what I can tell, it is an accepted rule that if you have a class A that has a member m that is IDisposable, A should implement IDisposable and it should call m.Dispose() inside of it.
I can't find a satisfying reason why this is the case.
I understand the rule that if you have unmanaged resources, you should provide a finalizer along with IDisposable so that if the user doesn't explicitly call Dispose, the finalizer will still clean up during GC.
However, with that rule in place, it seems like you shouldn't need to have the rule that this question is about. For instance...
If I have a class:
class MyImage
{
    private Image _img;
    ...
}
Convention states that I should have MyImage : IDisposable. But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
UPDATE
Found a good discussion on what I was trying to get at here.
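For reference, the convention being questioned would look roughly like this (a sketch built on the question's own hypothetical MyImage):

// Sketch of the convention: MyImage forwards Dispose to the disposable member
// it owns. No finalizer is added, because MyImage holds no unmanaged resources
// of its own.
class MyImage : IDisposable
{
    private Image _img;

    public void Dispose()
    {
        if (_img != null)
        {
            _img.Dispose();
            _img = null;
        }
    }
}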
But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
You've missed the point of Dispose entirely. It's not about your convenience. It's about the convenience of other components that might want to use those unmanaged resources. Unless you can guarantee that no other code in the system cares about the timely release of resources, and the user doesn't care about timely release of resources, you should release your resources as soon as possible. That's the polite thing to do.
In the classic Prisoner's Dilemma, a lone defector in a world of cooperators gains a huge benefit. But in your case, being a lone defector produces only the tiny benefit of you personally saving a few minutes by writing low-quality, best-practice-ignoring code. It's your users and all the programs they use that suffer, and you gain practically nothing. Your code takes advantage of the fact that other programs unlock files and release mutexes and all that stuff. Be a good citizen and do the same for them. It's not hard to do, and it makes the whole software ecosystem better.
UPDATE: Here is an example of a real-world situation that my team is dealing with right now.
We have a test utility. It has a "handle leak" in that a bunch of unmanaged resources aren't aggressively disposed; it's leaking maybe half a dozen handles per "task". It maintains a list of "tasks to do" when it discovers disabled tests, and so on. We have ten or twenty thousand tasks in this list, so we very quickly end up with so many outstanding handles -- handles that should be dead and released back into the operating system -- that soon none of the code in the system that is not related to testing can run. The test code doesn't care. It works just fine. But eventually the code being tested can't make message boxes or other UI and the entire system either hangs or crashes.
The garbage collector has no reason to know that it needs to run finalizers more aggressively to release those handles sooner; why should it? Its job is to manage memory. Your job is to manage handles, so you've got to do that job.
But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
Then there isn't one, if you don't care about timely release and you can ensure that the disposable object is written correctly (in truth I never make an assumption like that, not even with MS's code; you never know when something accidentally slipped by). The point is that you should care, as you never know when it will cause a problem. Think about an open database connection: leaving it hanging around means that it isn't returned to the pool, and you can run out of connections if several requests come in, each needing one.
Nothing says you have to do it if you don't care. Think of it this way: it's like freeing memory in an unmanaged program. You don't have to, but it is highly advisable, if for no other reason than that the person inheriting the program doesn't have to wonder why it wasn't taken care of and then try to clean it up.
Firstly, there's no guarantee of when an object will be cleaned up by the finalizer thread - think about the case where a class has a reference to a SQL connection. Unless you make sure this is disposed of promptly, you'll have a connection open for an unknown period of time - and you won't be able to reuse it.
Secondly, finalization is not a cheap process - you should be making sure that if your objects are disposed of properly you're calling GC.SuppressFinalize(this) to prevent finalization from happening.
Expanding on the "not cheap" aspect, the finalizer thread is a high-priority thread. It will take resources away from your main application if you give it too much to do.
Edit: OK, here's a blog article by Chris Brumme about finalization, including why it is expensive. (I knew I'd read loads about this somewhere.)
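For context, GC.SuppressFinalize belongs in the standard dispose pattern, roughly like this (a generic sketch, not code from the question):

// Generic sketch of the classic dispose pattern. The finalizer is only a
// backstop; a correct Dispose call suppresses it, so the object never has to
// go through the finalizer queue at all.
public class ResourceHolder : IDisposable
{
    private IntPtr _handle;   // some unmanaged resource (hypothetical)
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // no finalization cost if Dispose was called
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // release managed resources here
        }
        // release _handle (the unmanaged resource) here
        _disposed = true;
    }

    ~ResourceHolder()
    {
        Dispose(false);
    }
}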
If you don't care about the timely release of resources, then indeed there is no point. If you can be sure that the code is only for your consumption and you've got plenty of free memory/resources why not let GC hoover it up when it chooses to. OTOH, if someone else is using your code and creating many instances of (e.g.) MyImage, it's going to be pretty difficult to control memory/resource usage unless it disposes nicely.
Many classes require that Dispose be called to ensure correctness. If some C# code uses an iterator with a "finally" block, for example, the code in that block will not run if an enumerator is created with that iterator and not disposed. While there are a few cases where it would be impractical to ensure objects were cleaned up without finalizers, for the most part code which relies upon finalizers for correct operation or to avoid memory leaks is bad code.
If your code acquires ownership of an IDisposable object, then unless either the object's class is sealed or your code creates the object by calling a constructor (as opposed to a factory method), you have no way of knowing what the real type of the object is, and whether it can be safely abandoned. Microsoft may have originally intended that it should be safe to abandon any type of object, but that is unrealistic, and the belief that it should be safe to abandon any type of object is unhelpful. If an object subscribes to events, allowing for safe abandonment will require either adding a level of weak indirection to all events, or a level of (non-weak) indirection to all other accesses. In many cases, it's better to require that a caller Dispose an object correctly than to add significant overhead and complexity to allow for abandonment.
Note also, by the way, that even when objects try to accommodate abandonment it can still be very expensive. Create a Microsoft.VisualBasic.Collection (or whatever it's called), add a few objects, and create and Dispose a million enumerators. No problem - it executes very quickly. Now create and abandon a million enumerators. Major snooze fest unless you force a GC every few thousand enumerators. The Collection object is written to allow for abandonment, but that doesn't mean it doesn't have a major cost.
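To make the iterator point concrete, here is a small hedged example (the names are made up for illustration):

using System;
using System.Collections.Generic;

static class IteratorDemo
{
    // The finally block only runs when iteration completes or the enumerator
    // is disposed (foreach does the disposing for you).
    static IEnumerable<int> ReadValues()
    {
        try
        {
            yield return 1;
            yield return 2;
        }
        finally
        {
            Console.WriteLine("cleanup");
        }
    }

    static void Main()
    {
        // foreach disposes the enumerator, so "cleanup" prints even on early exit.
        foreach (var v in ReadValues()) { break; }

        // Abandoning a manually obtained enumerator: "cleanup" never runs here,
        // because the compiler-generated enumerator has no finalizer.
        var e = ReadValues().GetEnumerator();
        e.MoveNext();
    }
}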
If an object you're using implements IDisposable, it's telling you it has something important to do when you're finished with it. That important thing may be to release unmanaged resources, or unhook from events so that it doesn't handle events after you think you're done with it, etc, etc. By not calling the Dispose, you're saying that you know better about how that object operates than the original author. In some tiny edge cases, this may actually be true, if you authored the IDisposable class yourself, or you know of a bug or performance problem related to calling Dispose. In general, it's very unlikely that ignoring a class requesting you to dispose it when you're done is a good idea.
Talking about finalizers - as has been pointed out, they have a cost, which can be avoided by Disposing the object (if it uses SuppressFinalize). Not just the cost of running the finalizer itself, and not just the cost of having to wait till that finalizer is done before the GC can collect the object. An object with a finalizer survives the collection in which it is identified as being unused and needing finalization. So it will be promoted (if it's not already in gen 2). This has several knock on effects:
The next higher generation will be collected less frequently, so after the finalizer runs, you may be waiting a long time before the GC comes around to that generation and sweeps your object away. So it can take a lot longer to free memory.
This adds unnecessary pressure to the generation the object is promoted to. If it's promoted from gen 0 to gen 1, then gen 1 will now fill up earlier than it needs to.
This can lead to more frequent garbage collections at higher generations, which is another performance hit.
If the object's finalizer isn't completed by the time the GC comes around to the higher generation, the object can be promoted again. Hence in a bad case you can cause an object to be promoted from gen 0 to gen 2 without good reason.
Obviously if you're only doing this on one object it's not likely to cost you anything noticeable. If you're doing it as general practice because you find calling Dispose on objects you're using tiresome, then it can lead to all of the problems above.
Dispose is like a lock on a front door. It's probably there for a reason, and if you're leaving the building, you should probably lock the door. If it wasn't a good idea to lock it, there wouldn't be a lock.
Even if you don't care in this particular case, you should still follow the standard because you will care in some cases. It's much easier to set a standard and follow it always based on specific guidelines than have a standard that you sometimes disregard. This is especially true as your team grows and your product ages.

Good samples of using Finalizers in C#

When I read a few articles about memory management in C#, I was confused by Finalizer methods.
There are so many complicated rules related to them.
For instance, nobody knows when finalizers will be called, they are called even if the code in the constructor throws, the CLR doesn't guarantee that all finalizers are called when the program shuts down, etc.
What can finalizers be used for in real life?
The only example I found was a program which beeps when the GC starts.
Do you use finalizers in your code, and do you have some good samples?
UPD:
Finalizers can be used when developers want to make sure that some class is always disposed of correctly through IDisposable. (link; thanks Steve Townsend)
There is an exhaustive discussion of finalizer usage, with examples, here. Link courtesy of @SLaks at a related answer.
See also here for a more concise summary of when you need one (which is "not very often").
There's a nice prior answer here with another good real-world example.
To summarize with a pertinent extract:
Finalizers are needed to guarantee the release of scarce resources back into the operating system, like file handles, sockets, kernel objects, etc.
For more real-world examples, browse the affected classes in .NET:
https://learn.microsoft.com/en-us/search/?terms=.Finalize&scope=.NET
One valid reason I can think of when you might need to use a finalizer is if you wrap a third-party native code API in a managed wrapper, and the underlying native code API library requires the timely release of used operating system resources.
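As a sketch of that scenario (the library name and native entry points below are hypothetical P/Invoke imports, not a real API):

using System;
using System.Runtime.InteropServices;

// Hypothetical native wrapper: the finalizer is only a backstop in case the
// caller forgets to Dispose, so the native handle is still released eventually.
public sealed class NativeSession : IDisposable
{
    [DllImport("thirdparty.dll")]   // hypothetical library and entry points
    private static extern IntPtr OpenSession();

    [DllImport("thirdparty.dll")]
    private static extern void CloseSession(IntPtr handle);

    private IntPtr _handle = OpenSession();

    public void Dispose()
    {
        Release();
        GC.SuppressFinalize(this);
    }

    ~NativeSession()
    {
        Release();
    }

    private void Release()
    {
        if (_handle != IntPtr.Zero)
        {
            CloseSession(_handle);
            _handle = IntPtr.Zero;
        }
    }
}

In real code a SafeHandle would usually be preferable to a hand-written finalizer, but the sketch keeps to the shape the answer describes.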
The best practice known to me is plain and simple: don't use them. There might, however, be some corner cases where you want to use a finalizer, particularly when dealing with unmanaged objects and you can't implement the Dispose pattern (for whatever reason, e.g. legacy issues); then you can implement the Finalize method with caution (it could reduce the performance of your system, make your objects undead, and lead to other possibly weird scenarios, and mind the exceptions, as they are uncatchable).
In 99% of cases, just use the Dispose pattern and that method to clean up after yourself, and everything will be fine.

What's the purpose of implementing the IDisposable interface?

What's the purpose of implementing the IDisposable interface? I've seen some classes implementing it and I don't understand why.
If your class creates unmanaged resources, then you can implement IDisposable so that these resources will be cleaned up properly when the object is disposed of. You implement Dispose and release them there.
When your class makes use of some system resource, it's the class's responsibility to make sure the resource is freed too. By .NET design you're supposed to do that in the Dispose method of the class. The IDisposable interface marks that your class needs to free resources when it's no longer in use, and the Dispose method is made available so that users of your class can call it to free the consumed resources.
Implementing IDisposable is also essential if you want auto clean-up to work properly and want to use the using statement.
As well as freeing unmanaged resources, objects can usefully perform some operation the moment they go out of scope. A useful example might be a timer object: such objects could print out the time elapsed since their construction in the Dispose() method. These objects could then be used to log the approximate time taken for some set of operations:
using (Timer tmr = new Timer("blah"))
{
    // do whatever
}
This can be done manually, of course, but my feeling is that one should take advantage wherever possible of the compiler's ability to generate the right code automatically.
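A sketch of such a Timer class to match the usage above (this is a hypothetical class for illustration, built on Stopwatch, not a framework type):

using System;
using System.Diagnostics;

// Hypothetical Timer: reports elapsed time when disposed, which the using
// block above triggers automatically at the end of the scope.
public sealed class Timer : IDisposable
{
    private readonly string _label;
    private readonly Stopwatch _watch = Stopwatch.StartNew();

    public Timer(string label) { _label = label; }

    public void Dispose()
    {
        _watch.Stop();
        Console.WriteLine("{0}: {1} ms", _label, _watch.ElapsedMilliseconds);
    }
}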
It all has to do with the garbage collection mechanism. Chris Sells describes garbage collection, finalizers, and the reason for the Dispose pattern (and the IDisposable interface) in episode 10 of .NET Rocks! (starting about 34 minutes in).
Many objects manipulate other entities in ways that will cause problems if not cleaned up. These other entities may be almost anything, and they may be almost anywhere. As an example, a Socket object may ask another machine to open up a TCP connection. That other machine might not be capable of handling very many connections at once; indeed, it could be a web-equipped appliance that can only handle one connection at a time. If a program were to open a socket and simply forget about it, no other computer would be able to connect to the appliance unless or until the socket got closed (perhaps the appliance might close the socket itself after a few minutes of inactivity, but it would be useless until then).
If an object implements IDisposable, that means it has the knowledge and impetus required to perform necessary cleanup actions, and such actions need to be performed before such knowledge and impetus is lost. Calling IDisposable.Dispose will ensure that all such cleanup actions get carried out, whereupon the object may be safely abandoned.
Microsoft allows for objects to request protection from abandonment by registering a method called Finalize. If an object does so, the Finalize method will be called if the system detects that the object has been abandoned. Neither the object, nor any objects to which it holds direct or indirect references, will be erased from memory until the Finalize method has been given a chance to run. This provides something of a "backstop" in case an object is abandoned without being first Disposed. There are many traps, however, with objects that implement Finalize, since there's no guarantee as to when it will be called. Not only might an object be abandoned a long time before Finalize gets called, but if one isn't careful the system may actually call Finalize on an object while part of it is still in use. Dangerous stuff. It's far better to use Dispose properly.
