Why is there no contract for disposal? - c#

We have constructors and we can treat them as contracts to follow for object instantiation.
There's no other way to create an instance without providing the exact set of parameters to the constructor.
But how can we (and should we ever bother) enforce some pre-mortem activity?
We've got finalizers, but they are not recommended for general-purpose finalization.
We also have IDisposable to implement. But if we work with a disposable object without a using block, we have no guarantee that Dispose will ever be called.
Why is there no way to enforce some state of the object before it is let go of?
Tidying up in a finalizer is impossible because there's no guarantee that the object graph is intact; objects referenced by the dying object may already have been finalized or collected.
Sure, if client code fails to call, for instance, my object's SaveState(), that causes trouble for the client, not for my object.
Nonetheless it is considered good practice to require all needed dependencies to be injected in the constructor (if no default value is available). Nobody readily says: "leave a default constructor, create properties, and throw exceptions if the object is in an invalid state."
Update:
As there are many votes to close the question, I'd say that design patterns addressing this could also serve as an answer.
Whether you use DI or not, you can count how many times an object was requested/created. But without an explicit release call you do not know the moment when you should call Dispose.
I simply do not understand how to implement disposal at the right time.
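On the update about counting creations without an explicit release: one way to make the release explicit is a reference-counting wrapper, so the last holder's Release() triggers Dispose deterministically. This is purely a hypothetical sketch (TrackedResource and RefCounted are invented names, not framework types):

```csharp
using System;
using System.Threading;

// A disposable whose Dispose we can observe (for the demo only).
sealed class TrackedResource : IDisposable
{
    public bool Disposed;
    public void Dispose() => Disposed = true;
}

// Hypothetical sketch: shared ownership with an explicit release call.
// The last holder to call Release() disposes the wrapped object.
sealed class RefCounted<T> where T : IDisposable
{
    private readonly T _value;
    private int _count = 1;

    public RefCounted(T value) => _value = value;

    public T Value => _value;

    // Every additional holder acquires before use...
    public RefCounted<T> AddRef()
    {
        Interlocked.Increment(ref _count);
        return this;
    }

    // ...and releases when done; the final release disposes.
    public void Release()
    {
        if (Interlocked.Decrement(ref _count) == 0)
            _value.Dispose();
    }
}
```

This only pushes the problem one level up (now Release must be called), but it does give a well-defined "moment" for disposal.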

Why is there no way to enforce some state of the object before it will be let go of?
Because the whole point of a garbage collector is to simulate a machine that has an infinite amount of memory. If memory is infinite then you never need to clean it up.
You're conflating a semantic requirement of your program -- that a particular side effect occur at a particular time -- with the mechanisms of simulating infinite storage. In an ideal world those two things should not have anything to do with each other. Unfortunately we do not live in an ideal world; the existence of finalizers is evidence of that.
If there are effects that you want to achieve at a particular time, then those effects are part of your program and you should write code that achieves them. If they are important then they should be visible in the code so that people reading the code can see and understand them.
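In C#, the idiom that makes such an effect visible at a particular point in the code is the using statement, which the compiler expands to a try/finally. A minimal sketch:

```csharp
using System;

// A resource whose cleanup we can observe.
sealed class Resource : IDisposable
{
    public static bool Cleaned;
    public void Dispose() => Cleaned = true;
}

class Program
{
    public static void Main()
    {
        using (var r = new Resource())
        {
            // work with r
        } // the compiler-generated finally calls r.Dispose() here,
          // even if the block above throws
        Console.WriteLine(Resource.Cleaned); // True
    }
}
```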

Unfortunately, during the design of Java, it was anticipated that the garbage collector should be able to satisfy all cleanup requirements. That seems to have been a belief during early design stages of .NET as well.
Consequently, no distinction is made between:
an object reference which encapsulates exclusive ownership of its target;
a reference to an object which does not encapsulate ownership (its target is owned by someone else);
a reference whose owner knows that it will either encapsulate exclusive ownership or encapsulate none, and knows which case applies for the instance at hand;
or an object reference which encapsulates shared ownership.
If a language and framework were properly designed around such distinctions, it would seldom be necessary to write code where proper cleanup could not be statically verified (the first two cases, which probably apply 90%+ of the time, could easily be statically verified even with the .NET framework).
Unfortunately, because no such distinctions exist outside the very limited context of using statements, there's no way for a compiler or verifier to know, when a piece of code abandons a reference, whether anything else is expecting to clean up the object referred to thereby.
Consequently, there's no way to know in general whether the object should be disposed, and no generally-meaningful way to squawk if it should be but isn't.

Related

To Dispose() or Not To Dispose() elements in an array of IDisposable objects?

There are lots of examples of Arrays or Lists of IDisposable objects being returned from functions in .NET. For example, Process.GetProcesses().
If I call that method is it my responsibility to Dispose() of all the members of the array as I iterate through them?
Why should it be my responsibility, since I never created the objects, and the array I was given just holds references to objects created outside of my code?
I always thought it was the creator's burden to Dispose().
So what is the proper rule here?
There is no general rule. It's going to depend on the situation, and how the method in question is designed, as to whether or not "you" are responsible for disposing of objects you have access to. This is where documentation is often important to help users of the type understand their responsibilities.
I always thought it was the creator's burden to Dispose()
This cannot be strictly true. It is sometimes the case that a disposable object will outlive the lifetime of the block of code creating it. While it is simplest when the creator can dispose of the object, sometimes it's simply impossible for it to do so. Returning a disposable object from a method is one situation where it's often not possible for the code creating the disposable object to clean it up, as that code's lifetime needs to be shorter than the lifetime of the disposable object.
With relatively few exceptions (most of which could be described as least-of-evils approaches to dealing with poorly-designed code that can't be changed), every IDisposable instance should at any given moment in time have exactly one well-defined owner. In cases where a method returns something of a type that implements IDisposable, the contract for the method will specify whether the method is relinquishing ownership (in which case the caller should ensure that the object gets disposed--either by disposing of the object itself or relinquishing ownership to someone else), or whether the method is merely returning a reference to an object which is owned by someone else.
In properly-written code, the question of whether or not an object should be disposed is rarely a judgment call. The owner of an object should ensure that it gets disposed; nobody else should dispose it. Occasionally it may be necessary to have a method accept a parameter indicating whether the method should transfer ownership of an IDisposable. For example, if code wants to create a sound, pass it to a "start playing sound" method, and never deal with that sound again, it may be most convenient to have the sound-playing code take ownership of the sound and dispose it when it's done; if code wants to be able to play a sound repeatedly, however, and will ensure that the sound object stays alive as long as it's needed, it would be more convenient for the sound-playing code not to take ownership. Using separate methods may in some ways be cleaner, but using a parameter can aid encapsulation.
Generally, when code returns a list of objects that implement IDisposable, the purpose of the code is to identify objects without conveying any ownership interest in them. In the absence of an ownership interest, code receiving such a list should not call Dispose on it.
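A sketch of both sides of that contract (all names here are illustrative, not a real API): a method that relinquishes ownership to the caller, and one that merely identifies objects the registry still owns.

```csharp
using System;
using System.Collections.Generic;

sealed class Handle : IDisposable
{
    public bool Disposed;
    public void Dispose() => Disposed = true;
}

class HandleRegistry : IDisposable
{
    private readonly List<Handle> _owned = new List<Handle>();

    // Contract: ownership transfers to the caller, who must dispose.
    public Handle CreateHandle() => new Handle();

    // Adds a handle that remains owned by the registry.
    public Handle Register()
    {
        var h = new Handle();
        _owned.Add(h);
        return h;
    }

    // Contract: the registry stays the owner; callers must NOT
    // dispose the handles in this list.
    public IReadOnlyList<Handle> GetHandles() => _owned;

    // The single well-defined owner disposes everything it owns.
    public void Dispose()
    {
        foreach (var h in _owned) h.Dispose();
        _owned.Clear();
    }
}
```

Code receiving GetHandles() iterates without disposing, because ownership never transferred; anything obtained via CreateHandle() belongs in a using block.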
The GetProcesses method does not allocate any handles (or other unmanaged resources) within the Process instances returned.
Handles are created only if you call certain methods on the returned Process instances, and in almost all cases these are released before the method returns (e.g. Process.Kill).
Therefore it is completely unnecessary in most situations to dispose every Process instance returned.
The rule is very simple: if you think that other programs will use your IDisposables, then do not destroy them. Otherwise, do it.
For example: GetProcesses() returns other processes potentially being used by other programs, so you shouldn't dispose them.
On the other hand, files you've opened should be released for other processes in the OS, so you should close and dispose the wrapper streams over them (say, you should dispose the stream returned by the File.Open method).
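For the file case that looks like the following; the using block guarantees the OS handle is released even if an exception is thrown inside it:

```csharp
using System;
using System.IO;

class Program
{
    public static void Main()
    {
        string path = Path.GetTempFileName();

        // File.Open returns a FileStream wrapping an OS file handle.
        using (FileStream stream = File.Open(path, FileMode.Open))
        {
            Console.WriteLine(stream.Length); // 0 for a fresh temp file
        } // handle released here; other processes can now use the file

        File.Delete(path); // succeeds because the handle was released
    }
}
```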
Update:
From MSDN:
DO implement the Basic Dispose Pattern on types containing instances of disposable types. See the Basic Dispose Pattern section for details on the basic pattern. If a type is responsible for the lifetime of other disposable objects, developers need a way to dispose of them, too. Using the container's Dispose method is a convenient way to make this possible.
DO implement the Basic Dispose Pattern and provide a finalizer on types holding resources that need to be freed explicitly and that do not have finalizers. For example, the pattern should be implemented on types storing unmanaged memory buffers. The Finalizable Types section discusses guidelines related to implementing finalizers.
CONSIDER implementing the Basic Dispose Pattern on classes that themselves don’t hold unmanaged resources or disposable objects but are likely to have subtypes that do.
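A minimal sketch of the first guideline: a container type that owns a disposable member forwards Dispose to it. No finalizer is needed here because only managed resources are involved (the class and file names are illustrative):

```csharp
using System;
using System.IO;

// Basic Dispose Pattern on a type containing a disposable instance.
class LogSession : IDisposable
{
    private readonly StreamWriter _writer;
    private bool _disposed;

    public LogSession(string path) => _writer = new StreamWriter(path);

    public void Log(string line) => _writer.WriteLine(line);

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
            _writer.Dispose(); // the container disposes what it owns
        _disposed = true;
    }
}
```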

Is there any reason the C# / .NET compiler(s) do not warn about Dispose()?

I was thinking about this just today whilst I was writing some IDisposable code.
It's good practice for the developer to either call Dispose() directly, or if the lifetime of the object allows, to use the using construct.
The only instances we need to worry about, are those where we can't use using due to the mechanics of our code. But we should, at some point, be calling Dispose() on these objects.
Given that the C# compiler knows an object implements IDisposable, it could theoretically also know that Dispose() was never called on it (it's a pretty clever compiler as it is!). It may not know the semantics of when the programmer should do it, but it could serve as a good reminder that Dispose() is never called, because the object was never used in a using construct and Dispose() was never called directly, for any object that implements IDisposable.
Any reason for this, or are there thoughts to go down that route?
it could theoretically also know that Dispose() was never called on it
It could determine, in certain simple cases, that Dispose will never be called on it. But it is not possible to determine, solely based on a static analysis of the code, that all created instances will be disposed of. Code does not need to be very complex at all before even estimating whether objects are left undisposed becomes impractical.
To make matters worse, not all IDisposable object instances should be disposed. There can be a number of reasons for this. Sometimes an object implements IDisposable even though only a portion of its instances actually do anything in the implementation. (IEnumerator&lt;T&gt; is a good example of this: a large number of implementations do nothing when disposed, but some do.) If you know that the specific implementation you have won't ever do anything on disposal, you need not bother; if you don't know that, you need to ensure you call Dispose.
Then there are types such as Task that almost never actually need to be disposed. (See Do I need to dispose of Tasks?.) In the vast majority of cases you don't need to dispose of them, and needlessly cluttering your code with using blocks or dispose calls that do nothing hampers readability.
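The IEnumerator&lt;T&gt; point is worth seeing concretely: foreach compiles to a try/finally that disposes the enumerator, and any finally block inside an iterator only runs if that Dispose call happens. A sketch of the expansion:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    public static bool CleanedUp;

    // An iterator whose finally block stands in for important cleanup.
    static IEnumerable<int> Numbers()
    {
        try
        {
            yield return 1;
            yield return 2;
        }
        finally
        {
            CleanedUp = true; // runs only via enumerator.Dispose()
        }
    }

    public static void Main()
    {
        // Approximately what a foreach statement expands to:
        IEnumerator<int> e = Numbers().GetEnumerator();
        try
        {
            while (e.MoveNext())
            {
                if (e.Current == 1) break; // abandon the loop early
            }
        }
        finally
        {
            e.Dispose(); // without this, the iterator's finally never runs
        }
        Console.WriteLine(CleanedUp); // True
    }
}
```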
The major rule regarding IDisposable is "would the last one to leave the room, please turn off the lights". One major failing in the design of most .NET languages is that there is no general syntactic (or even attribute-tagging) convention to indicate whether the code that holds a particular variable or class that holds a particular field will:
Always be the last one to leave the room
Never be the last one to leave the room
Sometimes be the last one to leave the room, and easily know at runtime whether it will be (e.g. because whoever gave it a reference told it).
Possibly be the last one to leave the room, but not know before it leaves the room whether it will be the last one out.
If languages had a syntax to distinguish among those cases, then it would be simple for a compiler to ensure that things which know they're going to be the last one to leave the room turn out the lights and things which are never going to be the last one to leave the room don't turn out the lights. A compiler or framework could facilitate the third and fourth scenarios if the framework included wrapper types that the compiler knew about. Conventional reference-counting is generally not good as a primary mechanism to determine when objects are no longer needed, since it requires processor interlocks every time a reference is copied or destroyed even if the holder of the copy knows it won't be "the last one to leave the room", but a variation on reference-counting is often the cheapest and most practical way to handle scenario #4 [copying a reference should only increment the counter if the holders of both the original and copy are going to think that they might be the last owner, and destroying a copy of a reference should only decrement the counter if the reference had been incremented when that copy was created].
In the absence of a convention to indicate whether a particular reference should be considered "the last one in the room", there's no good way for a compiler to know whether the holder of that reference should "turn out the lights" (i.e. call Dispose). Both VB.NET and C# have a special using syntax for one particular situation where the holder of a variable knows it will be the last one to leave the room, but beyond that the compilers can't really demand that things be cleaned up if they don't understand them. C++/CLI does have a more general-purpose syntax, but unfortunately it has many restrictions on its use.
The code analysis rules will detect this. Depending on your version of VS, you can use either FxCop or the built-in analysis rules.
It requires static analysis of the code after it has been compiled.

Why is it always necessary to implement IDisposable on an object that has an IDisposable member?

From what I can tell, it is an accepted rule that if you have a class A that has a member m that is IDisposable, A should implement IDisposable and it should call m.Dispose() inside of it.
I can't find a satisfying reason why this is the case.
I understand the rule that if you have unmanaged resources, you should provide a finalizer along with IDisposable so that if the user doesn't explicitly call Dispose, the finalizer will still clean up during GC.
However, with that rule in place, it seems like you shouldn't need to have the rule that this question is about. For instance...
If I have a class:
class MyImage {
    private Image _img;
    ...
}
Convention states that I should have MyImage : IDisposable. But if Image has followed conventions and implemented a finalizer, and I don't care about the timely release of resources, what's the point?
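The convention being questioned would look roughly like this (a sketch; a stand-in Image class is used so the example is self-contained and does not depend on System.Drawing):

```csharp
using System;

// Stand-in for a resource-holding type such as System.Drawing.Image.
class Image : IDisposable
{
    public bool Disposed;
    public void Dispose() => Disposed = true;
}

// The accepted rule: MyImage owns a disposable member,
// so it implements IDisposable and forwards the call.
class MyImage : IDisposable
{
    private Image _img = new Image();

    public bool IsDisposed => _img == null;

    public void Dispose()
    {
        // Timely release instead of waiting for _img's finalizer (if any).
        _img?.Dispose();
        _img = null;
    }
}
```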
UPDATE
Found a good discussion on what I was trying to get at here.
But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
You've missed the point of Dispose entirely. It's not about your convenience. It's about the convenience of other components that might want to use those unmanaged resources. Unless you can guarantee that no other code in the system cares about the timely release of resources, and the user doesn't care about timely release of resources, you should release your resources as soon as possible. That's the polite thing to do.
In the classic Prisoner's Dilemma, a lone defector in a world of cooperators gains a huge benefit. But in your case, being a lone defector produces only the tiny benefit of you personally saving a few minutes by writing low-quality, best-practice-ignoring code. It's your users and all the programs they use that suffer, and you gain practically nothing. Your code takes advantage of the fact that other programs unlock files and release mutexes and all that stuff. Be a good citizen and do the same for them. It's not hard to do, and it makes the whole software ecosystem better.
UPDATE: Here is an example of a real-world situation that my team is dealing with right now.
We have a test utility. It has a "handle leak" in that a bunch of unmanaged resources aren't aggressively disposed; it's leaking maybe half a dozen handles per "task". It maintains a list of "tasks to do" when it discovers disabled tests, and so on. We have ten or twenty thousand tasks in this list, so we very quickly end up with so many outstanding handles -- handles that should be dead and released back into the operating system -- that soon none of the code in the system that is not related to testing can run. The test code doesn't care. It works just fine. But eventually the code being tested can't make message boxes or other UI and the entire system either hangs or crashes.
The garbage collector has no reason to know that it needs to run finalizers more aggressively to release those handles sooner; why should it? Its job is to manage memory. Your job is to manage handles, so you've got to do that job.
But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
Then there isn't one, if you don't care about timely release and you can ensure that the disposable object is written correctly (in truth I never make an assumption like that, not even with MS's code; you never know when something accidentally slipped by). The point is that you should care, as you never know when it will cause a problem. Think about an open database connection. Leaving it hanging around means that it isn't returned to the pool. You can run out of connections if several requests come in for one.
Nothing says you have to do it if you don't care. Think of it this way: it's like freeing memory in an unmanaged program. You don't have to, but it is highly advisable. If for no other reason than that the person inheriting the program doesn't have to wonder why it wasn't taken care of and then try to clean it up.
Firstly, there's no guarantee of when an object will be cleaned up by the finalizer thread - think about the case where a class has a reference to a SQL connection. Unless you make sure this is disposed of promptly, you'll have a connection open for an unknown period of time - and you won't be able to reuse it.
Secondly, finalization is not a cheap process - you should make sure that when your objects are disposed of properly, you call GC.SuppressFinalize(this) to prevent finalization from happening.
Expanding on the "not cheap" aspect, the finalizer thread is a high-priority thread. It will take resources away from your main application if you give it too much to do.
Edit: OK, here's a blog article by Chris Brumme about finalization, including why it is expensive. (I knew I'd read loads about this somewhere.)
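A sketch of that cost-avoidance: the finalizer exists only as a safety net for the unmanaged allocation, and the normal Dispose path cancels it with GC.SuppressFinalize so the object can be reclaimed in a single GC pass:

```csharp
using System;
using System.Runtime.InteropServices;

class NativeBuffer : IDisposable
{
    private IntPtr _buffer = Marshal.AllocHGlobal(1024); // unmanaged memory

    public void Dispose()
    {
        Free();
        // Cancel finalization: the GC can now reclaim this object in a
        // single pass instead of promoting it for the finalizer thread.
        GC.SuppressFinalize(this);
    }

    ~NativeBuffer() => Free(); // safety net, runs only if Dispose never did

    private void Free()
    {
        if (_buffer != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(_buffer);
            _buffer = IntPtr.Zero;
        }
    }
}
```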
If you don't care about the timely release of resources, then indeed there is no point. If you can be sure that the code is only for your consumption and you've got plenty of free memory/resources why not let GC hoover it up when it chooses to. OTOH, if someone else is using your code and creating many instances of (e.g.) MyImage, it's going to be pretty difficult to control memory/resource usage unless it disposes nicely.
Many classes require that Dispose be called to ensure correctness. If some C# code uses an iterator with a "finally" block, for example, the code in that block will not run if an enumerator is created with that iterator and not disposed. While there a few cases where it would be impractical to ensure objects were cleaned up without finalizers, for the most part code which relies upon finalizers for correct operation or to avoid memory leaks is bad code.
If your code acquires ownership of an IDisposable object, then unless either the object's class is sealed or your code creates the object by calling a constructor (as opposed to a factory method), you have no way of knowing what the real type of the object is, and whether it can be safely abandoned. Microsoft may have originally intended that it should be safe to abandon any type of object, but that is unrealistic, and the belief that it should be safe to abandon any type of object is unhelpful. If an object subscribes to events, allowing for safe abandonment will require either adding a level of weak indirection to all events, or a level of (non-weak) indirection to all other accesses. In many cases, it's better to require that a caller Dispose an object correctly than to add significant overhead and complexity to allow for abandonment.
Note also, btw, that even when objects try to accommodate abandonment it can still be very expensive. Create a Microsoft.VisualBasic.Collection (or whatever it's called), add a few objects, and create and Dispose a million enumerators. No problem - it executes very quickly. Now create and abandon a million enumerators. Major snooze fest unless you force a GC every few thousand enumerators. The Collection object is written to allow for abandonment, but that doesn't mean it doesn't have a major cost.
If an object you're using implements IDisposable, it's telling you it has something important to do when you're finished with it. That important thing may be to release unmanaged resources, or unhook from events so that it doesn't handle events after you think you're done with it, etc, etc. By not calling the Dispose, you're saying that you know better about how that object operates than the original author. In some tiny edge cases, this may actually be true, if you authored the IDisposable class yourself, or you know of a bug or performance problem related to calling Dispose. In general, it's very unlikely that ignoring a class requesting you to dispose it when you're done is a good idea.
Talking about finalizers - as has been pointed out, they have a cost, which can be avoided by Disposing the object (if it uses SuppressFinalize). Not just the cost of running the finalizer itself, and not just the cost of having to wait till that finalizer is done before the GC can collect the object. An object with a finalizer survives the collection in which it is identified as being unused and needing finalization. So it will be promoted (if it's not already in gen 2). This has several knock on effects:
The next higher generation will be collected less frequently, so after the finalizer runs, you may be waiting a long time before the GC comes around to that generation and sweeps your object away. So it can take a lot longer to free memory.
This adds unnecessary pressure to the collection the object is promoted to. If it's promoted from gen 0 to gen 1, then now gen 1 will fill up earlier than it needs to.
This can lead to more frequent garbage collections at higher generations, which is another performance hit.
If the object's finalizer isn't completed by the time the GC comes around to the higher generation, the object can be promoted again. Hence in a bad case you can cause an object to be promoted from gen 0 to gen 2 without good reason.
Obviously if you're only doing this on one object it's not likely to cost you anything noticeable. If you're doing it as general practice because you find calling Dispose on objects you're using tiresome, then it can lead to all of the problems above.
Dispose is like a lock on a front door. It's probably there for a reason, and if you're leaving the building, you should probably lock the door. If it wasn't a good idea to lock it, there wouldn't be a lock.
Even if you don't care in this particular case, you should still follow the standard because you will care in some cases. It's much easier to set a standard and follow it always based on specific guidelines than have a standard that you sometimes disregard. This is especially true as your team grows and your product ages.

Correct way of implementing Finalize and Dispose(When parent class implements IDisposable)

I was implementing Finalize and Dispose in my classes: I implemented IDisposable on my parent class and overrode the Dispose(bool) overload in my child classes. I was not sure:
Whether to use a duplicate isDisposed variable (as it's already there in the base class) or not?
Whether to implement a finalizer in the child class too or not?
Both these things are done in example given here -
http://guides.brucejmack.biz/CodeRules/FxCop/Docs/Rules/Usage/DisposeMethodsShouldCallBaseClassDispose.html
Whereas example in this MSDN article doesn't have any of these two -
http://msdn.microsoft.com/en-us/library/b1yfkh5e.aspx
whereas this example in MSDN is not complete -
http://msdn.microsoft.com/en-us/library/ms182330.aspx
It's very rare for a finalizer to be useful. The documentation you link to isn't totally helpful - it offers the following rather circular advice:
Implement Finalize only on objects that require finalization
That's an excellent example of begging the question, but it's not very helpful.
In practice, the vast majority of the time you don't want a finalizer. (One of the learning curves .NET developers have to go through is discovering that in most of the places they think they need a finalizer, they don't.) You've tagged this as (amongst other things) a WPF question, and I'd say it'd almost always be a mistake to put a finalizer on a UI object. (So even if you are in one of the unusual situations that turns out to require a finalizer, that work doesn't belong anywhere near code that concerns itself with WPF.)
For most of the scenarios in which finalizers seem like they might be useful, they turn out not to be, because by the time your finalizer runs, it's already too late for it to do anything useful.
For example, it's usually a bad idea to try to do anything with any of the objects your object has a reference to, because by the time your finalizer runs, those objects may already have been finalized. (.NET makes no guarantees about the order in which finalizers run, so you simply have no way of knowing whether the objects you've got references to have been finalized.) It's a bad idea to invoke a method on an object whose finalizer has already been run.
If you have some way of knowing that some object definitely hasn't been finalized, then it is safe to use it, but that's a pretty unusual situation to be in. (...unless the object in question has no finalizer, and makes use of no finalizable resources itself. But in that case, it's probably not an object you'd actually need to do anything to when your own object is going away.)
The main situation in which finalizers seem useful is interop: e.g., suppose you're using P/Invoke to call some unmanaged API, and that API returns you a handle. Perhaps there's some other API you need to call to close that handle. Since that's all unmanaged stuff, the .NET GC doesn't know what those handles are, and it's your job to make sure that they get cleaned up, at which point a finalizer is reasonable...except in practice, it's almost always best to use a SafeHandle for that scenario.
In practice, the only places I've found myself using finalizers have been a) experiments designed to investigate what the GC does, and b) diagnostic code designed to discover something about how particular objects are being used in a system. Neither kind of code should end up going into production.
So the answer to whether you need "to implement a finalizer in child class too or not" is: if you need to ask, then the answer is no.
As for whether to duplicate the flag...other answers are providing contradictory advice here. The main points are 1) you do need to call the base Dispose and 2) your Dispose needs to be idempotent. (I.e., it doesn't matter if it's called once, twice, 5 times, 100 times - it shouldn't complain if it's called more than once.) You're at liberty to implement that however you like - a boolean flag is one way, but I've often found that it's enough to set certain fields to null in my Dispose method, at which point that removes any need for a separate boolean flag - you can tell that Dispose was already called because you already set those fields to null.
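A sketch of that null-field variant, with no separate boolean flag (the class name is illustrative):

```csharp
using System;
using System.IO;

class ReportWriter : IDisposable
{
    private StreamWriter _writer; // null doubles as the "disposed" flag

    public ReportWriter(string path) => _writer = new StreamWriter(path);

    public void Write(string line) => _writer.WriteLine(line);

    public void Dispose()
    {
        // Idempotent: the second and later calls see null and do nothing.
        _writer?.Dispose();
        _writer = null;
    }
}
```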
A lot of the guidance out there on IDisposable is extremely unhelpful, because it addresses the situation where you need a finalizer, but that's actually a very unusual case. It means that lots of people write IDisposable implementations that are far more complex than necessary. In practice, most classes fall into the category Stephen Cleary calls "level 1" in the article that jpierson linked to. And for these, you don't need all the GC.KeepAlive, GC.SuppressFinalize, and Dispose(bool) stuff that clutters up most of the examples. Life's actually much simpler most of the time, as Cleary's advice for these "level 1" types shows.
Duplicate is needed
If you don't have any clean-up in the child class, simply call base.Dispose(); and if there is some class-level clean-up, do it after the call to base.Dispose(). You need to separate the state of these two classes, so there should be an IsDisposed boolean for each class. This way you can add clean-up code wherever you need it.
When you declare a class as IDisposable, you simply tell the GC that you're taking care of its clean-up procedure, and you should call SuppressFinalize so the GC removes the instance from its finalization queue. Unless you call GC.SuppressFinalize(this), nothing special happens to an IDisposable class. So if you implement it as I mentioned, there's no need for a finalizer, since you just told the GC not to finalize it.
The correct way to implement IDisposable depends on whether you have any unmanaged resources owned by your class. The exact way to implement IDisposable is still something not all developers agree on and some like Stephen Cleary have strong opinions on the disposable paradigm in general.
see: Implementing Finalize and Dispose to Clean Up Unmanaged Resources
The documentation for the IDisposable interface also explains this briefly, and this article on MSDN points out some of the same things.
As far as whether a duplicate boolean field "isDisposed" is required in the base class: it appears that this is mainly a useful convention for when a subclass itself may add additional unmanaged resources that need to be disposed of. Since Dispose(bool) is declared virtual, calling Dispose on a subclass instance always causes that class's Dispose method to be called first, which in turn calls base.Dispose as its last step, giving a chance to clean up each level in the inheritance hierarchy. So I would summarize this as: if your subclass has additional unmanaged resources beyond what is owned by the base, then you will probably be best served by your own boolean isDisposed field to track its disposal in a transactional manner inside its Dispose method; but as Ian mentions in his answer, there are other ways to represent an already-disposed state.
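A sketch of the two-level arrangement described above, with a separate flag per class and the base called from the override (names and the observable booleans are illustrative):

```csharp
using System;

class Parent : IDisposable
{
    private bool _disposed;    // the parent's own state
    public bool ParentCleaned; // observable stand-in for real cleanup

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing) ParentCleaned = true; // release parent resources
        _disposed = true;
    }
}

class Child : Parent
{
    private bool _disposed;   // separate flag for the child's state
    public bool ChildCleaned;

    protected override void Dispose(bool disposing)
    {
        if (!_disposed)
        {
            if (disposing) ChildCleaned = true; // release child resources
            _disposed = true;
        }
        base.Dispose(disposing); // always chain to the base
    }
}
```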
1) No need to duplicate.
2) Implementing a finalizer will help to dispose of items that are not explicitly disposed, but it is not guaranteed to run promptly. It is a good practice to do so.
Only implement a finalizer if an object holds information about stuff needing cleanup, and this information is in some form other than Object references to other objects needing cleanup (e.g. a file handle stored as an Int32). If a class implements a finalizer, it should not hold strong Object references to any other objects which are not required for cleanup. If it would hold other references, the portion responsible for cleanup should be split off into its own object with a finalizer, and the main object should hold a reference to that. The main object should then not have a finalizer.
Derived classes should only have finalizers if the purpose of the base class was to support one. If the purpose of a class doesn't center around a finalizer, there's not much point allowing a derived class to add one, since derived classes almost certainly shouldn't (even if they need to add unmanaged resources, they should put the resources in their own class and just hold a reference to it).

Usages of object resurrection

I have a problem with memory leaks in my .NET Windows service application, so I've started to read articles about memory management in .NET. And I have found an interesting practice in one of Jeffrey Richter's articles. The practice is called "object resurrection". It amounts to putting code in the finalizer that assigns this to a global or static variable:
// C# does not let you override Finalize directly (compiler error CS0249);
// use destructor syntax instead.
~MyClass() {
    Application.ObjHolder = this;      // resurrect: store a reachable reference
    GC.ReRegisterForFinalize(this);    // so the finalizer can run again later
}
I understand that this is a bad practice, but I would like to know which patterns use it. If you know any, please write them here.
From the same article: "There are very few good uses of resurrection, and you really should avoid it if possible."
The best use I can think of is a "recycling" pattern. Consider a Factory that produces expensive, practically immutable objects; for instance, objects instantiated by parsing a data file, or by reflecting an assembly, or deeply copying a "master" object graph. The results are unlikely to change each time you perform this expensive process. It is in your best interest to avoid instantiation from scratch; however, for some design reasons, the system must be able to create many instances (no singletons), and your consumers cannot know about the Factory so that they can "return" the object themselves; they may have the object injected, or be given a factory method delegate from which they obtain a reference. When the dependent class goes out of scope, normally the instance would as well.
A possible answer is to override Finalize(), clean up any mutable state in the instance, and then, as long as the Factory is still reachable, reattach the instance to some member of the Factory. This allows the garbage-collection process, in effect, to "recycle" the valuable portion of these objects when they would otherwise go out of scope and be destroyed entirely. The Factory can check whether it has any recycled objects available in its "bin" and, if so, polish one up and hand it out. The factory only has to instantiate a new copy of the object when the total number of objects in use by the process increases.
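A minimal sketch of that recycling factory, assuming resurrection via the finalizer (names are illustrative; a real implementation would need care about finalization order and about the "pack rat" risk described below):

```csharp
using System;
using System.Collections.Concurrent;

class Expensive {
    // Bin of instances whose owners dropped them; refilled by the finalizer.
    internal static readonly ConcurrentBag<Expensive> Bin = new ConcurrentBag<Expensive>();

    public Expensive() { /* expensive parse/reflection/deep-copy work here */ }

    internal void ResetMutableState() { /* wipe per-consumer state */ }

    ~Expensive() {
        ResetMutableState();
        Bin.Add(this);                   // resurrect into the factory's bin
        GC.ReRegisterForFinalize(this);  // allow recycling more than once
    }
}

static class ExpensiveFactory {
    public static Expensive Get() {
        Expensive recycled;
        // Hand out a recycled instance if one exists; otherwise pay full cost.
        return Expensive.Bin.TryTake(out recycled) ? recycled : new Expensive();
    }
}
```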
Other possible uses may include some highly specialized logger or audit implementation, where objects you wish to process after their death will attach themselves to a work queue managed by this process. After the process handles them, they can be totally destroyed.
In general, if you want dependents to THINK they're getting rid of an object, or to not have to bother, but you want to keep the instance, resurrection may be a good tool, but you'll have to watch it VERY carefully to avoid situations in which objects receiving resurrected references become "pack rats" and keep every instance that has ever been created in memory for the lifetime of the process.
Speculative: In a Pool situation, like the ConnectionPool.
You might use it to reclaim objects that were not properly disposed but to which the application code no longer holds a reference. You can't keep them in a List in the Pool because that would block GC collection.
A brother of mine worked on a high-performance simulation platform once. He related to me how that in the application, object construction was a demonstrable bottleneck to the application performance. It would seem the objects were large and required some significant processing to initialize.
They implemented an object repository to contain "retired" object instances. Before constructing a new object they would first check to see if one already existed in the repository.
The trade-off was increased memory consumption (as there might exist many unused objects at a time) for increased performance (as the total number of object constructions were reduced).
Note that the decision to implement this pattern was based on the bottlenecks they observed through profiling in their specific scenario. I would expect this to be an exceptional circumstance.
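The repository described above can also be sketched without resurrection at all, using explicit check-out and return (a simplified, single-threaded illustration):

```csharp
using System;
using System.Collections.Generic;

class SimulationObject {
    public SimulationObject() { /* costly initialization here */ }
    public void Reset() { /* restore to a just-constructed state */ }
}

class Repository {
    private readonly Stack<SimulationObject> retired = new Stack<SimulationObject>();

    public SimulationObject Acquire() {
        // Reuse a retired instance if one exists; otherwise pay the construction cost.
        return retired.Count > 0 ? retired.Pop() : new SimulationObject();
    }

    public void Retire(SimulationObject obj) {
        obj.Reset();
        retired.Push(obj);   // memory stays allocated, but construction is avoided
    }
}
```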
The only place I can think of using this, potentially, would be when you were trying to cleanup a resource, and the resource cleanup failed. If it was critical to retry the cleanup process, you could, technically, "ReRegister" the object to be finalized, which hopefully would succeed, the second time.
That being said, I'd avoid this altogether in practice.
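A sketch of that retry idea, assuming a hypothetical TryReleaseResource() that can fail transiently:

```csharp
using System;

class FragileResource {
    // Hypothetical release call that may fail, e.g. a remote or network handle.
    private bool TryReleaseResource() { /* ... */ return true; }

    ~FragileResource() {
        if (!TryReleaseResource()) {
            // Cleanup failed: ask the GC to run this finalizer once more.
            GC.ReRegisterForFinalize(this);
        }
    }
}
```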
As far as I know, .NET calls finalizers in no specific order. If your class contains references to other objects, they could already have been finalized (and hence disposed) by the time your finalizer is called. If you then decide to resurrect your object, you will hold references to finalized/disposed objects.
class A {
    // HashSet<T> lives in System.Collections.Generic; there is no Set<T> in .NET.
    static HashSet<A> resurrectedA = new HashSet<A>();
    B b = new B();

    ~A() {
        // Will not die: keep a reference in resurrectedA.
        resurrectedA.Add(this);
        GC.ReRegisterForFinalize(this);
        // At this point you may have a problem. By resurrecting this you are
        // resurrecting b, and b's Finalize may already have been called.
    }
}

class B : IDisposable {
    // Regular IDisposable/destructor pattern:
    // http://msdn.microsoft.com/en-us/library/b1yfkh5e(v=vs.110).aspx
    public void Dispose() { }
}
