I have an ASP.NET C# singleton (scoped to a session through HttpContext.Current.Session) which receives messages from code (warnings, errors, exceptions, etc.). The messages relate to code problems and are mainly used while debugging, not in production.
I wrote a custom destructor for this object so that its contents are written/appended to a file on disk.
I wanted to ask two things related to this situation:
a] Is it a good idea to open a file and write to it in an object destructor? Concurrent I/O access is handled through a static lock.
b] When is the destructor called for session-scoped objects? Is it only when the session expires on the server?
I also recommend using an existing logging package. If you do decide to do this yourself, here are some things to bear in mind for the future:
a) No, it's not a good idea. You shouldn't access managed resources in the finalizer (destructor). If you have some log strings in memory, for example, it's bad practice to access them (or the list that contains them), as they may already have been finalized themselves at this point.
I don't want to repeat the recommended pattern, so see https://stackoverflow.com/a/1943856/2586804
You'll see there is only one place you should access managed resources during Dispose, and that is when it is called by user code, not by the GC. So this should help you come to the conclusion that, to achieve this, you must call .Dispose() yourself (or via a using block), because when (and if) the GC does it, it cannot access the managed members that contain the log lines.
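A condensed sketch of that shape (the names and the log-file path are illustrative, not from the linked answer):

using System;
using System.Collections.Generic;
using System.IO;

public class Logger : IDisposable
{
    private readonly List<string> _lines = new List<string>();

    public void Add(string message) => _lines.Add(message);

    public void Dispose()
    {
        Dispose(disposing: true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Reached only via an explicit Dispose() call:
            // managed members such as _lines are safe to use here.
            File.AppendAllLines("log.txt", _lines);
        }
        // When called from the finalizer (disposing == false),
        // _lines may already have been finalized, so don't touch it.
    }

    ~Logger() => Dispose(disposing: false);
}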
b) Don't know, but it doesn't matter as you cannot use finalizer for this purpose anyway.
The bottom line is you can't rely on GC to run code for you. It's bad practice because you don't know when it's going to happen, plus any reference to the object anywhere, now or in the future will prevent the object being collected and introduce a bug.
You also shouldn't get c# Finalizers/Destructors to run code because that's not what they are for, they are for freeing unmanaged resources so that the machine doesn't run out. And note, it's a rare occurrence to use them in C# because most peoples day-to-day work is all with managed objects.
Instead, explicitly tell the object to write its logs; a method called Flush would be a good name. Or just let it write one line at a time; that would be the usual behaviour.
To accomplish this, implement IDisposable and then wrap instances of your logger in a using block, which guarantees that Dispose is called on your logger.
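With a type like the Logger sketched above, that reduces to (the message text is illustrative):

using (var log = new Logger())
{
    log.Add("warning: unexpected state in Foo.Bar");
}   // Dispose runs here, deterministically, even if an exception was thrown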
Related
Should I run Dispose before application exit?
For example, I create many objects, and some of them have event subscriptions:
var myObject = new MyClass();
myObject.OnEvent += OnEventHandle;
And, for example, at my work I have to use classes that implement the IDisposable interface.
Then, I decide to close app and do this:
Environment.Exit(-1);
Am I right?
Should I call Dispose on all objects which implement the IDisposable interface?
Can a memory leak occur?
P.S. This is server-side app, using WCF, MQ.
In this specific case, you may choose not to Dispose. I was sure I recollected a Raymond Chen analogy about not emptying the bins just before you have a building demolished.¹
Your entire process is about to disappear. There's no need for you to do any cleanup of internal resources, since the OS is about to reclaim all of its resources.
However, you have to weigh this against a) appearing non-standard, and b) potentially triggering warnings from e.g. StyleCop, versus the expected reward of taking slightly less time to exit. Do you really need to optimize this part of your application?
As others have commented, I'd usually choose to still wrap my disposable objects in usings, even though it may be strictly unnecessary in this case.
¹ This is the one about not doing anything in DLL_PROCESS_DETACH. The reasoning is similar.
We have constructors and we can treat them as contracts to follow for object instantiation.
There's no way to create an instance without providing the exact set of parameters the constructor requires.
But how can we (and should we ever bother) enforce some pre-mortem activity?
We've got finalizers, but they are not recommended for general-purpose finalization.
We also have IDisposable to implement. But if we work with a disposable object without a using block, we have no guarantee that Dispose will ever be called.
Why is there no way to enforce some state of the object before it is let go of?
Tidying up in a finalizer is impossible because there's no guarantee that the object graph is intact; objects referenced by the dying object may already have been finalized by the GC.
Sure, if client code never calls, for instance, the object's SaveState(), the resulting trouble belongs to the client, not to my object.
Nonetheless, it is considered good practice to require all needed dependencies to be injected through the constructor (when no default value is available). Nobody readily says: "leave a default constructor, create properties, and throw exceptions if the object is in an invalid state."
Update:
As there are many votes to close the question, I'd say that design patterns addressing this could also serve as an answer.
Whether you use DI or not, you can count how many times an object was requested/created. But without an explicit release call, you do not know the moment at which you should call Dispose.
I simply do not understand how to implement disposal at the right time.
Why is there no way to enforce some state of the object before it will be let go of?
Because the whole point of a garbage collector is to simulate a machine that has an infinite amount of memory. If memory is infinite then you never need to clean it up.
You're conflating a semantic requirement of your program -- that a particular side effect occur at a particular time -- with the mechanisms of simulating infinite storage. In an ideal world those two things should not have anything to do with each other. Unfortunately we do not live in an ideal world; the existence of finalizers is evidence of that.
If there are effects that you want to achieve at a particular time, then those effects are part of your program and you should write code that achieves them. If they are important then they should be visible in the code so that people reading the code can see and understand them.
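For example, rather than hoping a finalizer eventually flushes a writer, put the effect in the code path where you need it (a minimal sketch):

using System.IO;

var writer = new StreamWriter("audit.txt");
try
{
    writer.WriteLine("operation completed");
}
finally
{
    writer.Dispose();   // the flush-and-close effect happens here, visibly and on time
}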
Unfortunately, during the design of Java, it was anticipated that the garbage collector should be able to satisfy all cleanup requirements. That seems to have been a belief during early design stages of .NET as well.
Consequently, no distinction is made between:
an object reference which encapsulates exclusive ownership of its target;
a reference to an object which does not encapsulate ownership (its target is owned by someone else);
a reference whose owner knows that it will either encapsulate exclusive ownership or encapsulate none, and knows which case applies for the instance at hand;
or an object reference which encapsulates shared ownership.
If a language and framework were properly designed around such distinctions, it would seldom be necessary to write code where proper cleanup could not be statically verified (the first two cases, which probably apply 90%+ of the time, could easily be statically verified even with the .NET framework).
Unfortunately, because no such distinctions exist outside the very limited context of using statements, there's no way for a compiler or verifier to know, when a piece of code abandons a reference, whether anything else is expecting to clean up the object referred to thereby.
Consequently, there's no way to know in general whether the object should be disposed, and no generally-meaningful way to squawk if it should be but isn't.
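One place the framework does surface the owning/non-owning distinction is StreamReader's leaveOpen parameter; a small illustration:

using System;
using System.IO;
using System.Text;

Stream stream = File.OpenRead("data.txt");

// Non-owning reference: leaveOpen tells the reader NOT to dispose the stream.
using (var reader = new StreamReader(stream, Encoding.UTF8,
        detectEncodingFromByteOrderMarks: true, bufferSize: 1024, leaveOpen: true))
{
    Console.WriteLine(reader.ReadLine());
}

stream.Dispose();   // ownership stayed with this code, so cleanup is its job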
I have a problem with memory leaks in my .NET Windows service application, so I've started reading articles about memory management in .NET, and I found an interesting practice in one of Jeffrey Richter's articles. The practice is called "object resurrection". It amounts to writing finalizer code that sets a global or static variable back to this:
// C# has no directly overridable Finalize method; a destructor compiles down to one.
~MyClass() {
    Application.ObjHolder = this;     // resurrect: the object is rooted again
    GC.ReRegisterForFinalize(this);   // re-arm the finalizer for the next death
}
I understand that this is bad practice; however, I would like to know about patterns that use it. If you know any, please write them here.
From the same article: "There are very few good uses of resurrection, and you really should avoid it if possible."
The best use I can think of is a "recycling" pattern. Consider a Factory that produces expensive, practically immutable objects; for instance, objects instantiated by parsing a data file, or by reflecting an assembly, or deeply copying a "master" object graph. The results are unlikely to change each time you perform this expensive process. It is in your best interest to avoid instantiation from scratch; however, for some design reasons, the system must be able to create many instances (no singletons), and your consumers cannot know about the Factory so that they can "return" the object themselves; they may have the object injected, or be given a factory method delegate from which they obtain a reference. When the dependent class goes out of scope, normally the instance would as well.
A possible answer is to override Finalize(), clean up any mutable portion of the instance's state, and then, as long as the Factory is in scope, reattach the instance to some member of the Factory. This allows the garbage-collection process, in effect, to "recycle" the valuable portion of these objects when they would otherwise go out of scope and be totally destroyed. The Factory can check whether it has any recycled objects available in its "bin", and if so, polish one up and hand it out. The Factory then only has to instantiate a new copy of the object when the total number of objects in use by the process increases.
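A sketch of how that bin might be wired up; ExpensiveObject, ResetMutableState, and the factory are invented names, and the finalization-order caveat raised in the last answer of this thread still applies:

using System;
using System.Collections.Concurrent;

public class ExpensiveObject
{
    internal static readonly ConcurrentBag<ExpensiveObject> Bin =
        new ConcurrentBag<ExpensiveObject>();

    public ExpensiveObject() { /* expensive parse/reflect/deep-copy work here */ }

    private void ResetMutableState() { /* clear per-consumer state only */ }

    ~ExpensiveObject()
    {
        // Resurrect instead of dying: root the instance in the factory's bin
        // and re-arm the finalizer so the next abandonment lands here again.
        ResetMutableState();
        Bin.Add(this);
        GC.ReRegisterForFinalize(this);
    }
}

public static class ExpensiveObjectFactory
{
    public static ExpensiveObject Get() =>
        ExpensiveObject.Bin.TryTake(out var recycled) ? recycled : new ExpensiveObject();
}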
Other possible uses may include some highly specialized logger or audit implementation, where objects you wish to process after their death will attach themselves to a work queue managed by this process. After the process handles them, they can be totally destroyed.
In general, if you want dependents to THINK they're getting rid of an object, or not to have to bother, while you keep the instance, resurrection may be a good tool. But you'll have to watch it VERY carefully to avoid situations in which objects receiving resurrected references become "pack rats" and keep every instance ever created in memory for the lifetime of the process.
Speculative: In a Pool situation, like the ConnectionPool.
You might use it to reclaim objects that were not properly disposed but to which the application code no longer holds a reference. You couldn't keep them in a list inside the pool, because that would prevent them from ever being collected.
A brother of mine once worked on a high-performance simulation platform. He related to me that, in that application, object construction was a demonstrable bottleneck for performance. It seems the objects were large and required significant processing to initialize.
They implemented an object repository to contain "retired" object instances. Before constructing a new object they would first check to see if one already existed in the repository.
The trade-off was increased memory consumption (as there might be many unused objects at any given time) for increased performance (as the total number of object constructions was reduced).
Note that the decision to implement this pattern was based on the bottlenecks they observed through profiling in their specific scenario. I would expect this to be an exceptional circumstance.
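A repository like that doesn't need resurrection if callers retire objects explicitly; a minimal sketch (the type and method names are invented):

using System.Collections.Concurrent;

public class ObjectRepository<T> where T : new()
{
    private readonly ConcurrentBag<T> _retired = new ConcurrentBag<T>();

    // Reuse a retired instance when one exists; construct only as a last resort.
    public T Acquire() => _retired.TryTake(out var item) ? item : new T();

    // Callers hand objects back explicitly instead of dropping the reference.
    public void Retire(T item) => _retired.Add(item);
}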
The only place I can think of using this, potentially, would be when you were trying to cleanup a resource, and the resource cleanup failed. If it was critical to retry the cleanup process, you could, technically, "ReRegister" the object to be finalized, which hopefully would succeed, the second time.
That being said, I'd avoid this altogether in practice.
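Sketched out, that retry might look like this (ReleaseNativeResource is hypothetical, and note that a permanently failing cleanup would keep the object alive forever):

using System;

public class FragileResourceHolder
{
    ~FragileResourceHolder()
    {
        try
        {
            ReleaseNativeResource();   // hypothetical cleanup that can fail
        }
        catch
        {
            // Cleanup failed: queue this object for another finalization pass.
            GC.ReRegisterForFinalize(this);
        }
    }

    private void ReleaseNativeResource() { /* ... */ }
}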
As far as I know, .NET calls finalizers in no specific order. If your class contains references to other objects, they could already have been finalized (and hence disposed) by the time your finalizer is called. If you then decide to resurrect your object, you will have references to finalized/disposed objects.
class A {
    static HashSet<A> resurrectedA = new HashSet<A>();
    B b = new B();

    ~A() {
        // Will not die: keep a reference in resurrectedA.
        resurrectedA.Add(this);
        GC.ReRegisterForFinalize(this);
        // At this point you may have a problem: by resurrecting this
        // you are also resurrecting b, and b's finalizer may already have run.
    }
}

class B : IDisposable {
    // regular IDisposable/destructor pattern:
    // http://msdn.microsoft.com/en-us/library/b1yfkh5e(v=vs.110).aspx
    public void Dispose() { }
}
What's the purpose of implementing the IDisposable interface? I've seen some classes implementing it and I don't understand why.
If your class creates unmanaged resources, you can implement IDisposable so that those resources are cleaned up properly when the object is disposed of. You implement Dispose and release them there.
When your class makes use of some system resource, it's the class's responsibility to make sure the resource is freed too. By .NET design, you're supposed to do that in the class's Dispose method. The IDisposable interface marks your class as one that needs to free resources when it's no longer in use, and the Dispose method is made available so that users of your class can call it to free the consumed resources.
Implementing IDisposable is also essential if you want automatic clean-up to work properly with the using statement.
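FileStream is a typical example: it holds an OS file handle, so callers free it deterministically with using:

using System.IO;

using (var stream = new FileStream("data.bin", FileMode.Open))
{
    // ... read from the stream ...
}   // Dispose releases the OS file handle here, not whenever the GC gets around to it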
As well as freeing unmanaged resources, objects can usefully perform some operation the moment they go out of scope. A useful example might be a timer object: such an object could print out the time elapsed since its construction in its Dispose() method. These objects could then be used to log the approximate time taken for some set of operations:
using (Timer tmr = new Timer("blah"))
{
    // do whatever
}
This can be done manually, of course, but my feeling is that one should take advantage wherever possible of the compiler's ability to generate the right code automatically.
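A minimal sketch of such a timer (a hypothetical logging type, not System.Timers.Timer):

using System;
using System.Diagnostics;

public sealed class Timer : IDisposable
{
    private readonly string _label;
    private readonly Stopwatch _watch = Stopwatch.StartNew();

    public Timer(string label) => _label = label;

    // Runs when the using block exits, logging the approximate elapsed time.
    public void Dispose() =>
        Console.WriteLine($"{_label}: {_watch.ElapsedMilliseconds} ms");
}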
It all has to do with the garbage collection mechanism. Chris Sells describes garbage collection, finalizers, and the reason for the Dispose pattern (and the IDisposable interface) in episode 10 of .NET Rocks! (starting about 34 minutes in).
Many objects manipulate other entities in ways that will cause problems if not cleaned up. These other entities may be almost anything, and they may be almost anywhere. As an example, a Socket object may ask another machine to open up a TCP connection. That other machine might not be capable of handling very many connections at once; indeed, it could be a web-equipped appliance that can only handle one connection at a time. If a program were to open a socket and simply forget about it, no other computer would be able to connect to the appliance unless or until the socket got closed (perhaps the appliance might close the socket itself after a few minutes of inactivity, but it would be useless until then).
If an object implements IDisposable, that means it has the knowledge and impetus required to perform necessary cleanup actions, and such actions need to be performed before such knowledge and impetus is lost. Calling IDisposable.Dispose will ensure that all such cleanup actions get carried out, whereupon the object may be safely abandoned.
Microsoft allows for objects to request protection from abandonment by registering a method called Finalize. If an object does so, the Finalize method will be called if the system detects that the object has been abandoned. Neither the object, nor any objects to which it holds direct or indirect references, will be erased from memory until the Finalize method has been given a chance to run. This provides something of a "backstop" in case an object is abandoned without being first Disposed. There are many traps, however, with objects that implement Finalize, since there's no guarantee as to when it will be called. Not only might an object be abandoned a long time before Finalize gets called, but if one isn't careful the system may actually call Finalize on an object while part of it is still in use. Dangerous stuff. It's far better to use Dispose properly.
I'd like to know when I should and shouldn't wrap things in a using block.
From what I understand, the compiler translates it into a try/finally, where the finally block calls Dispose() on the object.
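That's essentially right; the expansion is roughly this (sketched with a FileStream):

using System;
using System.IO;

// using (var fs = new FileStream("x.bin", FileMode.Open)) { /* body */ }
// compiles, in effect, to:
FileStream fs = new FileStream("x.bin", FileMode.Open);
try
{
    /* body */
}
finally
{
    if (fs != null) ((IDisposable)fs).Dispose();
}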
I always use a using around database connections and file access, but it's more out of habit than out of a 100% understanding. I know you should explicitly (or with a using) Dispose() objects that control resources, to ensure they are released immediately rather than whenever the CLR feels like it, but that's where my understanding breaks down.
Are IDisposables not disposed of when they go out of scope?
Do I only need to use a using when my object makes use of Dispose to tidy itself up?
Thanks
Edit: I know there are a couple of other posts on the using keyword, but I'm more interested in answers relating to the CLR and exactly what's going on internally.
Andrew
No, IDisposable items are not disposed when they go out of scope. It is for precisely this reason that we need IDisposable - for deterministic cleanup.
They will eventually get garbage collected, and if there is a finalizer it will (maybe) be called - but that could be a long time in the future (not good for connection pools etc). Garbage collection is dependent on memory pressure - if nothing wants extra memory, there is no need to run a GC cycle.
Interestingly (perhaps), there are some cases where using is a pain: when the class in question sometimes throws an exception from Dispose(). WCF is an offender here. I have discussed this topic (with a simple workaround) here.
Basically - if the class implements IDisposable, and you own an instance (i.e. you created it or whatever), it is your job to ensure that it gets disposed. That might mean via "using", or it might mean passing it to another piece of code that assumes responsibility.
I've actually seen debug code of the type:
#if DEBUG
    ~Foo() {
        // complain loudly that somebody forgot to dispose...
    }
#endif
(where the Dispose calls GC.SuppressFinalize)
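Filled out, the pattern looks something like this (the assertion message is illustrative):

using System;
using System.Diagnostics;

public class Foo : IDisposable
{
    public void Dispose()
    {
        // ... release resources ...
        GC.SuppressFinalize(this);   // a disposed Foo never reaches the finalizer
    }

#if DEBUG
    ~Foo()
    {
        // Only runs if Dispose was never called.
        Debug.Fail("Foo was garbage-collected without being disposed");
    }
#endif
}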
"Are IDisposables not disposed of when
they go out of scope?"
No. If the IDisposable object is finalizable, which is not the same thing, then it will be finalized when it's garbage collected.
Which might be soon or might be almost never.
Jeff Richter's C#/CLR book is very good on all this stuff, and the Framework Design Guidelines book is also useful.
Do I only need to use a using when my object makes use of Dispose to tidy itself up?
You can only use 'using' when the object implements IDisposable. The compiler will object if you try to do otherwise.
To add to the other answers, you should use using (or an explicit Dispose) whenever an object holds any resources other than managed memory. Examples would be things like files, sockets, database connections, or even GDI drawing handles.
The garbage collector will eventually finalise these objects, but only at some unspecified time in the future. You can't rely on that happening in a timely fashion, and you might run out of the resource in the meantime.
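For instance, such resources are typically stacked in using blocks so each is released promptly:

using System.IO;

using (var source = File.OpenRead("input.dat"))
using (var destination = File.Create("output.dat"))
{
    source.CopyTo(destination);
}   // both streams, and the OS handles behind them, are released right here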