I want to know: should I dispose a Graphics object before reusing it? By reusing, I mean replacing its value, something like:
gmp = panelm.CreateGraphics();
Should I dispose it before doing that? Here is an example of the code where I use it:
gmp.DrawImage(newImage, 0, 0);
if (newImage.Size != panelm.Size)
{
    panelm.Invoke((MethodInvoker)delegate { panelm.Size = newImage.Size; });
    this.Invoke((MethodInvoker)delegate { this.Size = newImage.Size; });
    gmp.Dispose();
    gmp = panelm.CreateGraphics();
}
This is inside a while loop; before the loop, I create gmp from panelm. I never dispose of it inside the loop, I just reuse it all the time, except when the sizes don't match. Then I need to recreate it (otherwise it is too big/small).
So the question is: should I dispose it first, or just call CreateGraphics again?
Also, I can't use a using block on gmp, because then I only have two options:
1. Create it before the while loop and reuse it until the loop ends (meaning I can never change it).
2. Create it inside the while loop (meaning it is recreated on every iteration, which I think is a waste).
Thanks
So you're asking if you should call Dispose() on it before you give it a new value?
Graphics gmp = panelm.CreateGraphics();
//do work
gmp.Dispose();
gmp = panelm.CreateGraphics();
versus
Graphics gmp = panelm.CreateGraphics();
//do work
gmp = panelm.CreateGraphics();
As good practice, you should call Dispose() when you're done with it. A Graphics object wraps an unmanaged GDI+ handle, so although the garbage collector will eventually clean it up via the finalizer if you don't, you'd be holding that handle longer than necessary; you're not permanently leaking resources either way.
Reference-type variables don't hold objects, but instead identify them; it may be helpful to think of them as object IDs. If a call to CreateGraphics returns "Object #4872", then it is necessary to ensure that something will call Dispose on Object #4872. If someVariable happens to hold "Object #4872", then saying someVariable.Dispose won't actually do anything to someVariable, but will instead call Dispose on object #4872. If the code were to overwrite the variable before calling Dispose, it would have no way of knowing which object needed to have its Dispose method called.
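To make that concrete, here is a minimal sketch (reusing gmp and panelm from the question) showing that Dispose acts on the object, not the variable:
Graphics old = gmp;             // both variables now identify the same object
gmp = panelm.CreateGraphics();  // gmp now identifies a new object
old.Dispose();                  // the old object's GDI+ handle is still released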
Related
Should if statements be used to assist in the stack's memory de-allocation?
Example A:
var objectHolder = new ObjectHolder();
if (true)
{
    List<DefinedObject> objectList;
    using (var sr = new GenericStreamReader<DefinedObject>())
    {
        objectList = sr.Get().ToList();
    }
    if (true)
    {
        var DOF = new DefinedObjectFactory();
        objectHolder.DefinedObjects = DOF.DefineObjects(objectList);
    }
}
//example endpoint
Example B:
var objectHolder = new ObjectHolder();
List<DefinedObject> objectList;
using (var sr = new GenericStreamReader<DefinedObject>())
{
    objectList = sr.Get().ToList();
}
var DOF = new DefinedObjectFactory();
objectHolder.DefinedObjects = DOF.DefineObjects(objectList);
//example endpoint
Will Example A have a lighter footprint on the stack when the example endpoint is reached than Example B does when its example endpoint is reached?
First off, the whole point of a stack-based allocation system is precisely that you do not need to optimize it in any way. Don't worry about it. The jitter is perfectly capable of realizing that a local will never be read or written again, and re-using its storage if it feels that's the best thing to do. Let the jitter do its job; it doesn't need your help. (*)
Rather, write your program so that local variables make sense to the reader. That's what you should be optimizing for.
Finally, there is never a need to say "if (true) { }" to introduce a new scope. Just introduce a new scope. It is perfectly legal to say:
void M()
{
    { // new scope
    }
    { // another one
    }
}
(*) There is a situation where the jitter needs your help, and that is the situation where a local refers to an object on the heap that contains a resource that is going to be used by unmanaged code. The jitter does not know that unmanaged code is going to use the object's resources, and might decide that no one is using this object any more and clean it up early. The finalizer of the object might then release the resource on the finalizer thread while the unmanaged code is using the resource! An object is not guaranteed to stay alive just because a local variable is holding onto it. If the local variable is never read from again then the jitter can re-use its storage and tell the garbage collector that it is safe to collect the referred-to object, which will then potentially crash the unmanaged code. You can use KeepAlive to hint to the jitter that a particular object needs to remain alive and not be optimized away.
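A hypothetical sketch of that hazard; none of these names are a real API, they only show the shape of the problem and where GC.KeepAlive goes:
using System;

class NativeResourceHolder
{
    public IntPtr Handle; // a native resource that the finalizer frees

    ~NativeResourceHolder()
    {
        FreeNative(Handle); // runs on the finalizer thread
    }

    static void FreeNative(IntPtr h) { /* release the native resource */ }

    static void UseFromUnmanagedCode(IntPtr h) { /* stand-in for a P/Invoke call */ }

    static void Work()
    {
        var holder = new NativeResourceHolder();
        IntPtr h = holder.Handle;  // a raw IntPtr carries no liveness information
        UseFromUnmanagedCode(h);   // holder could be finalized during this call...
        GC.KeepAlive(holder);      // ...unless we keep it alive until here
    }
}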
The if(true) will be compiled out in an optimized build (the only kind of build where a variable's lifespan is shorter than the whole method), so there is absolutely no difference between the two versions you've suggested.
Based on the usage, I'm assuming DefinedObjectFactory is a class, not a struct. Therefore, the only thing that's on the stack is a reference to DefinedObjectFactory. The actual object is on the heap, and is controlled by the garbage collector.
The only stack space you're potentially saving is the space for a single pointer, so it's not worth it.
Even if this does make some difference, I think it's very likely that you're worrying about the wrong things. Is stack space allocation really an issue for your app?
In general, doing "clever" things in code for the sake of micro-optimizations is usually not worth it. It's usually a much better idea to write your code in the most clean and straightforward way possible. After doing that, if you find you actually have some perf/scalability problems (based on doing actual measurements), you can choose to rewrite/optimize the parts that are bottlenecks.
Most times you'll find that the clean/straightforward/readable version of the code performs just fine. And if it doesn't, the problems are probably not in the places you thought they were.
There's only one thing you can guarantee about de-allocation and garbage collection - it will happen at some stage.
As other people have said, the only thing your if(true) will achieve is being optimised out by the Jitter.
You're already using a using(..) { } pattern so instead of using the if(true) block I'd refactor your code to support this:
if (true)
{
    var DOF = new DefinedObjectFactory();
    objectHolder.DefinedObjects = DOF.DefineObjects(objectList);
}
To:
using (var DOF = new DefinedObjectFactory())
{
    objectHolder.DefinedObjects = DOF.DefineObjects(objectList);
}
And see if that helps.
There is one other thing you can try, but it's absolutely not recommended for production code, as you shouldn't pre-empt the memory manager: simply add a call to
GC.Collect();
When you exit your blocks.
I don't think it'll help but it might demonstrate to you why it's generally not worth worrying about scope and de-allocation.
We're dealing with the GC being too quick in a .Net program.
Because we use a class with native resources and do not call GC.KeepAlive(), the GC collects the object before the native access ends. As a result, the program crashes.
We have exactly the problem as described here:
Does the .NET garbage collector perform predictive analysis of code?
Like so:
{
    var img = new ImageWithNativePtr();
    IntPtr p = img.GetData();
    // DANGER!
    ProcessData(p);
}
The point is: The JIT generates information that shows the GC that img is not used at the point when GetData() runs. If a GC-Thread comes at the right time, it collects img and the program crashes. One can solve this by appending GC.KeepAlive(img);
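Applied to the fragment above, that fix is a one-line addition:
{
    var img = new ImageWithNativePtr();
    IntPtr p = img.GetData();
    ProcessData(p);
    GC.KeepAlive(img); // img is now provably live until ProcessData returns
}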
Unfortunately there is already too much code written (at too many places) to rectify the issue easily.
Therefore: is there, for example, an attribute (e.g. for ImageWithNativePtr) to make the JIT behave as it does in a Debug build? In a Debug build, the variable img remains valid until the end of the scope ( } ), while in Release it loses validity at the DANGER comment.
To the best of my knowledge there is no way to control jitter's behavior based on what types a method references. You might be able to attribute the method itself, but that won't fill your order. This being so, you should bite the bullet and rewrite the code. GC.KeepAlive is one option. Another is to make GetData return a safe handle which will contain a reference to the object, and have ProcessData accept the handle instead of IntPtr — that's good practice anyway. GC will then keep the safe handle around until the method returns. If most of your code has var instead of IntPtr as in your code fragment, you might even get away without modifying each method.
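A hedged sketch of the safe-handle idea; the type name and the back-reference to the image are illustrative, not the poster's actual API:
using System;
using Microsoft.Win32.SafeHandles;

// Returned by GetData() instead of a raw IntPtr and passed to ProcessData.
// As long as the handle is reachable, _owner keeps the image alive too.
sealed class ImageDataHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    private readonly object _owner; // the ImageWithNativePtr that owns the memory

    public ImageDataHandle(object owner, IntPtr data) : base(false)
    {
        _owner = owner;
        SetHandle(data);
    }

    // The image frees the real memory; the handle itself owns nothing.
    protected override bool ReleaseHandle()
    {
        return true;
    }
}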
You have a few options.
(Requires work, more correct) - Implement IDisposable on your ImageWithNativePtr class; a using block compiles down to try { ... } finally { obj.Dispose(); }, which will keep the object alive, provided you update your code with usings. You can ease the pain of doing this by installing something like CodeRush (even the free Xpress edition supports this), which can generate using blocks for you.
(Easier, not correct, more complicated build) - Use PostSharp or Mono.Cecil and create your own attribute. Typically this attribute would cause GC.KeepAlive() to be inserted into these methods.
The CLR has nothing built in for this functionality.
I believe you can emulate what you want with a container that implements IDisposable and a using statement. The using statement defines the scope, and you can place anything in it that needs to stay alive over that scope. This can be a helpful mechanism if you have no control over the implementation of ImageWithNativePtr.
The container of things to dispose is a useful idiom. Particularly when you really should be disposing of something ... which is probably the case with an image.
using (var keepAliveContainer = new KeepAliveContainer())
{
    var img = new ImageWithNativePtr();
    keepAliveContainer.Add(img);
    IntPtr p = img.GetData();
    ProcessData(p);
    // anything added to the container is still referenced here
}
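For reference, a minimal sketch of what the hypothetical KeepAliveContainer could look like; it simply holds references so everything added stays reachable until Dispose runs at the end of the using block:
using System;
using System.Collections.Generic;

sealed class KeepAliveContainer : IDisposable
{
    private readonly List<object> _items = new List<object>();

    public void Add(object item)
    {
        _items.Add(item);
    }

    public void Dispose()
    {
        // Running at the end of the using block, this guarantees the
        // container (and everything it references) was alive until here.
        foreach (var item in _items)
            (item as IDisposable)?.Dispose();
        _items.Clear();
    }
}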
I am using a 3rd-party object I didn't create that, over time, consumes a lot of resources. This object shouldn't in any way contain state; it simply performs a calculation. Despite this fact, every time I call a specific function of this object, a little more memory is consumed. A few hours later, my program is sitting at gigabytes of allocated memory.
The object was originally initialized as a static member of my Program class in my command-line application. I have found that if I wrap my entire program in a class and reinitialize it every now and again, the older (and bloated) object is deallocated by the GC and a new, smaller object replaces it.
My issue is that this method is quite clumsy and ruins the flow of my program.
Is there any other way to dispose of an object? I am led to believe GC.Collect() will only collect unreachable objects. Is there any way I can make an object 'unreachable'?
Edit: As requested, the code:
static ILexicon lexicon = new Lexicon();
...
lexicon.LoadDataFromFile(@"lexicon.dat", null);
...
byte similarityScore(string w1, string w2, PartOfSpeech pos, SimilarityMeasure measure)
{
    if (w1 == w2)
        return 255;
    if (pos != PartOfSpeech.Noun && pos != PartOfSpeech.Verb)
        return 0;

    IList<ILemma> w1_lemmas = lexicon.FindSenses(w1, pos);
    IList<ILemma> w2_lemmas = lexicon.FindSenses(w2, pos);

    byte result;
    byte score = 0;
    foreach (ILemma w1_lemma in w1_lemmas)
    {
        foreach (ILemma w2_lemma in w2_lemmas)
        {
            result = (byte)(w1_lemma.GetSimilarity(w2_lemma, measure) * 255);
            if (result > score)
                score = result;
        }
    }
    return score;
}
As similarityScore is called, more memory is allocated to a private member of lexicon. It does not implement IDisposable, and there are no obvious functions to clear the memory. The library is based on WordNet and uses an algorithm to find path lengths in the hypernym tree to calculate the similarity of two words. Unless there is caching, I can't see why it would need to store any memory. What is certain is that I can't change it. I'm almost certain there is nothing wrong with my code; I just need to dispose of lexicon when it gets too large. (N.B. it takes a second or two to load the lexicon from file into memory.)
If the object doesn't implement IDisposable and you want to push it out of scope you can set all references to it to null and then the force garbage collection with GC.Collect().
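Applied to the lexicon from the question, that could look something like this (Lexicon and LoadDataFromFile are the question's third-party types; forcing a collection is only defensible here because the instance is gigabytes in size):
// Drop the only reference to the bloated instance, force a collection,
// then rebuild a small fresh one.
lexicon = null;
GC.Collect();
GC.WaitForPendingFinalizers();
lexicon = new Lexicon();
lexicon.LoadDataFromFile(@"lexicon.dat", null);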
GC.Collect() is very expensive. If you're going to have to do this frequently, you might want to consider contacting the vendor.
Find out:
If you are using their library correctly, or is there something you're doing wrong that's causing the memory leak.
If their library is leaking memory even when used as intended, can they fix the leak?
Additional note: if the 3rd-party library is native and you're accessing it through COM interop, you can use Marshal.ReleaseComObject to release the underlying COM object.
You could try calling the Dispose() method. This would make the object unusable, so you would have to instantiate another one. I assume your program runs in a loop, so the object can be a loop variable with the call to Dispose at the bottom.
I would suggest that if you can get your hands on a memory profiler, you use it. A memory profiler will let you pause your program, click on a class, and see a list of objects of that class. One can then click on an object and see how it was created, and the "path" to that object from a root (e.g. there's a static class foo, which holds a reference to a bar, which holds a reference to a boz, which holds a reference to a reallybigthing). Often, seeing that will make it clear what needs to be done to break the chain.
You might be able to download the source from the WordNet repository and modify the code, since it is open source.
Working on a number of legacy systems written in various versions of .NET, across many different companies, I keep finding examples of the following pattern:
public void FooBar()
{
    object foo = null;
    object bar = null;
    try
    {
        foo = new object();
        bar = new object();
        // Code which throws exception.
    }
    finally
    {
        // Destroying objects
        foo = null;
        bar = null;
    }
}
To anybody who knows how memory management works in .NET, this kind of code is painfully unnecessary; the garbage collector does not need you to manually assign null to tell it that the old object can be collected, nor does assigning null instruct the GC to immediately collect the object.
This pattern is just noise, making it harder to understand what the code is trying to achieve.
Why, then, do I keep finding this pattern? Is there a school that teaches this practice? Is there a language in which assigning null values to locally scoped variables is required to correctly manage memory? Is there some additional value in explicitly assigning null that I haven't perceived?
It's FUD/cargo-cult programming (thanks to Daniel Earwicker) by developers who are used to "freeing" resources, bad GC implementations, and bad APIs.
Some GCs didn't cope well with circular references. To get rid of them, you had to break the cycle "somewhere". Where? Well, if in doubt, then everywhere. Do that for a year and it's moved into your fingertips.
Also, setting the field to null gives you the feeling of "doing something", because as developers we always fear "forgetting something".
Lastly, we have APIs which must be closed explicitly, because there is no real language support to say "close this when I'm done with it" and let the computer figure it out, just as with GC. So you have APIs where you have to call cleanup code and APIs where you don't. This sucks and encourages patterns like the above.
It is possible that it came from VB, which used a reference-counting strategy for memory management and object lifetime. Setting a reference to Nothing (the equivalent of null) would decrement the reference count; once that count became zero, the object was destroyed synchronously. The count was decremented automatically upon leaving the scope of a method, so even in VB this technique was mostly useless; however, there were special situations where you would want to greedily destroy an object, as illustrated by the following code.
Public Sub Main()
    Dim big As Variant
    Set big = GetReallyBigObject()
    Call big.DoSomething
    Set big = Nothing
    Call TimeConsumingOperation
    Call ConsumeMoreMemory
End Sub
In the above code the object referenced by big would have lingered until the end without the call to Set big = Nothing. That may be undesirable if the other stuff in the method was a time consuming operation or generated more memory pressure.
It comes from C/C++, where explicitly setting your pointers to NULL after freeing them was the norm (to eliminate dangling pointers).
After calling free():
#include <stdlib.h>

#define A_CONST 64 /* arbitrary size for the example */

void example(void)
{
    char *dp = malloc(A_CONST);
    // Once we free dp, it becomes a dangling pointer because it still
    // points to freed memory
    free(dp);
    // Set dp to NULL so it is no longer dangling
    dp = NULL;
}
Classic VB developers also did the same thing when writing their COM components to prevent memory leaks.
It is more common in languages with deterministic garbage collection and without RAII, such as the old Visual Basic. Even there it was mostly unnecessary, except for breaking cyclic references, which reference counting cannot reclaim on its own. So possibly it really stems from bad C++ programmers who use dumb pointers all over the place; in C++, it makes sense to set dumb pointers to 0 after deleting them to prevent double deletion.
I've seen this a lot in VBScript code (classic ASP) and I think it comes from there.
I think it used to be a common misunderstanding among former C/C++ developers. They knew that the GC will free their memory, but they didn't really understand when and how. Just clean it and carry on :)
I suspect that this pattern comes from translating C++ code to C# without pausing to understand the differences between C# finalization and C++ finalization. In C++ I often null things out in the destructor, either for debugging purposes (so that you can see in the debugger that the reference is no longer valid) or, rarely, because I want a smart object to be released. (If that's the meaning I'd rather call Release on it and make the meaning of the code crystal-clear to the maintainers.) As you note, this is pretty much senseless in C#.
You see this pattern in VB/VBScript all the time too, for different reasons. I mused a bit about what might cause that here:
Link
Maybe the convention of assigning null originated from the fact that, had foo been an instance variable instead of a local variable, you should remove the reference before the GC can collect the object. Someone slept through the first sentence and started nullifying all their variables; the crowd followed.
It comes from C/C++ where doing a free()/delete on an already released pointer could result in a crash while releasing a NULL-pointer simply did nothing.
This means that this construct (C++) will cause problems
void foo(bool condition)
{
    myclass *mc = new myclass(); // let's assume you really need new here
    if (condition)
    {
        delete mc;  // mc is now dangling...
    }
    delete mc;      // ...so this second delete is undefined behavior (often a crash)
}
while this will work
void foo(bool condition)
{
    myclass *mc = new myclass(); // let's assume you really need new here
    if (condition)
    {
        delete mc;
        mc = NULL;  // mc no longer dangles...
    }
    delete mc;      // ...and deleting a NULL pointer does nothing
}
Conclusion: it's totally unnecessary in C#, Java, and just about any other garbage-collected language.
Consider a slight modification:
public void FooBar()
{
    object foo = null;
    object bar = null;
    try
    {
        foo = new object();
        bar = new object();
        // Code which throws exception.
    }
    finally
    {
        // Destroying objects
        foo = null;
        bar = null;
    }
    vavoom(foo, bar);
}
The author(s) may have wanted to ensure that the great Vavoom (*) did not get pointers to malformed objects if an exception was previously thrown and caught. Paranoia, resulting in defensive coding, is not necessarily a bad thing in this business.
(*) If you know who he is, you know.
VB developers had to dispose of all of their objects to try to mitigate the chance of a memory leak. I can imagine this is where it came from, as VB devs migrated over to .NET/C#.
I can see it coming from either a misunderstanding of how the garbage collection works, or an effort to force the GC to kick in immediately - perhaps because the objects foo and bar are quite large.
I've seen this in some Java code before. It was used on a static variable to signal that the object should be destroyed.
It probably didn't originate from Java though, as using it for anything other than a static variable would also not make sense in Java.
It comes from C++ code, especially smart pointers. In that case it's roughly equivalent to calling .Dispose() in C#.
It's not a good practice, at most a developer's instinct. There is no real value in assigning null in C#, except perhaps helping the GC break a circular reference.
Should you set all the objects to null (Nothing in VB.NET) once you have finished with them?
I understand that in .NET it is essential to dispose of any instances of objects that implement the IDisposable interface to release some resources, although the object can still be something after it is disposed (hence the IsDisposed property on forms), so I assume it can still reside in memory, or at least in part?
I also know that when an object goes out of scope it is then marked for collection, ready for the next pass of the garbage collector (although this may take time).
So with this in mind, will setting it to null speed up the system releasing the memory, as it does not have to work out that it is no longer in scope, and are there any bad side effects?
MSDN articles never do this in examples, and currently I do this as I cannot see the harm. However, I have come across a mixture of opinions, so any comments are useful.
Karl is absolutely correct, there is no need to set objects to null after use. If an object implements IDisposable, just make sure you call IDisposable.Dispose() when you're done with that object (wrapped in a try..finally, or, a using() block). But even if you don't remember to call Dispose(), the finaliser method on the object should be calling Dispose() for you.
I thought this was a good treatment:
Digging into IDisposable
and this
Understanding IDisposable
There isn't any point in trying to second-guess the GC and its management strategies, because it's self-tuning and opaque. There was a good discussion about the inner workings with Jeffrey Richter on Dot Net Rocks: Jeffrey Richter on the Windows Memory Model. Richter's book CLR via C#, chapter 20, also has a great treatment.
Another reason to avoid setting objects to null when you are done with them is that it can actually keep them alive for longer.
e.g.
void foo()
{
    var someType = new SomeType();
    someType.DoSomething();
    // someType is now eligible for garbage collection

    // ... rest of method not using 'someType' ...
}
will allow the object referred to by someType to be GC'd after the call to DoSomething, but
void foo()
{
    var someType = new SomeType();
    someType.DoSomething();
    // someType is NOT eligible for garbage collection yet
    // because that variable is used at the end of the method

    // ... rest of method not using 'someType' ...

    someType = null;
}
may sometimes keep the object alive until the end of the method. In practice, the JIT will usually optimize away the assignment to null, so both bits of code end up being the same.
No don't null objects. You can check out https://web.archive.org/web/20160325050833/http://codebetter.com/karlseguin/2008/04/28/foundations-of-programming-pt-7-back-to-basics-memory/ for more information, but setting things to null won't do anything, except dirty your code.
Also:
using (SomeObject someObject = new SomeObject())
{
    // do stuff with someObject
}
// someObject will have been disposed of
In general, there's no need to null objects after use, but in some cases I find it's a good practice.
If an object implements IDisposable and is stored in a field, I think it's good to null it, just to avoid using the disposed object. The bugs of the following sort can be painful:
this.myField.Dispose();
// ... at some later time
this.myField.DoSomething();
It's good to null the field after disposing it, so that you get a NullReferenceException right at the line where the field is used again. Otherwise, you might run into some cryptic bug down the line (depending on exactly what DoSomething does).
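The fix is the mirror image of the buggy sequence above; a minimal sketch:
this.myField.Dispose();
this.myField = null; // any later use now fails fast

// ... at some later time
this.myField.DoSomething(); // throws NullReferenceException right here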
Chances are that your code is not structured tightly enough if you feel the need to null variables.
There are a number of ways to limit the scope of a variable:
As mentioned by Steve Tranby
using (SomeObject someObject = new SomeObject())
{
    // do stuff with someObject
}
// someObject will have been disposed of
Similarly, you can simply use curly brackets:
{
    // Declare the variable and use it
    SomeObject someObject = new SomeObject();
}
// The variable is no longer available
I find that using curly brackets without any "heading" really cleans up the code and helps make it more understandable.
In general there is no need to set objects to null. But suppose you have a Reset functionality in your class.
Then you might do it, because you do not want to call Dispose twice, since some implementations of Dispose may not be written correctly and may throw a System.ObjectDisposedException.
private void Reset()
{
    if (_dataset != null)
    {
        _dataset.Dispose();
        _dataset = null;
    }
    // ... more such member variables, like an Oracle connection (_oraConnection)
}
The only time you should set a variable to null is when the variable does not go out of scope and you no longer need the data associated with it. Otherwise there is no need.
This kind of "there is no need to set objects to null after use" is not entirely accurate. There are times you need to null the variable after disposing it.
Yes, you should ALWAYS call .Dispose() or .Close() on anything that has it when you are done, be it file handles, database connections, or disposable objects.
Separate from that is the very practical pattern of LazyLoad.
Say I have an instantiated ObjA of class A. Class A has a public property called PropB of class B.
Internally, PropB uses the private field _PropB, which defaults to null. When PropB's getter is used, it checks whether _PropB is null; if it is, it opens the resources needed to instantiate a B into _PropB, then returns _PropB.
To my experience, this is a really useful trick.
Where the need to null comes in is when you reset or change A in some way such that the contents of _PropB were derived from the previous values of A. You then need to Dispose AND null out _PropB so LazyLoad can reset and fetch the right value, if the code requires it.
If you only do _PropB.Dispose() and shortly after expect the null check for LazyLoad to succeed, it won't be null, and you'll be looking at stale data. In effect, you must null it after Dispose() just to be sure.
I sure wish it were otherwise, but I've got code right now exhibiting this behavior after a Dispose() on a _PropB and outside of the calling function that did the Dispose (and thus almost out of scope), the private prop still isn't null, and the stale data is still there.
Eventually, the disposed property will null out, but that's been non-deterministic from my perspective.
The core reason, as dbkk alludes is that the parent container (ObjA with PropB) is keeping the instance of _PropB in scope, despite the Dispose().
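A minimal sketch of the pattern described above; the class names A and B mirror the description, and B implementing IDisposable is my assumption:
using System;

class B : IDisposable
{
    public void Dispose() { /* release the resources opened on load */ }
}

class A
{
    private B _propB; // defaults to null

    public B PropB
    {
        get
        {
            if (_propB == null)
                _propB = new B(); // lazy-load on first access
            return _propB;
        }
    }

    public void Reset()
    {
        if (_propB != null)
        {
            // Dispose AND null, so the next PropB access reloads fresh data.
            _propB.Dispose();
            _propB = null;
        }
    }
}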
Stephen Cleary explains very well in this post: Should I Set Variables to Null to Assist Garbage Collection?
Says:
The Short Answer, for the Impatient
Yes, if the variable is a static field, or if you are writing an enumerable method (using yield return) or an asynchronous method (using async and await). Otherwise, no.
This means that in regular methods (non-enumerable and non-asynchronous), you do not set local variables, method parameters, or instance fields to null.
(Even if you’re implementing IDisposable.Dispose, you still should not set variables to null).
The important thing that we should consider is Static Fields.
Static fields are always root objects, so they are always considered “alive” by the garbage collector. If a static field references an object that is no longer needed, it should be set to null so that the garbage collector will treat it as eligible for collection.
Setting static fields to null is meaningless if the entire process is shutting down. The entire heap is about to be garbage collected at that point, including all the root objects.
Conclusion:
Static fields; that’s about it. Anything else is a waste of time.
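For example, a minimal sketch of the static-field case (the class and field are illustrative, not from the post):
static class ReportCache
{
    // A static field is a GC root: whatever it references stays alive
    // for the life of the process unless we null it out.
    private static byte[] _cache;

    public static void Load()
    {
        _cache = new byte[100_000_000];
    }

    public static void Release()
    {
        _cache = null; // the array is now eligible for collection
    }
}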
There are some cases where it makes sense to null references. For instance, when you're writing a collection--like a priority queue--and by your contract, you shouldn't be keeping those objects alive for the client after the client has removed them from the queue.
But this sort of thing only matters in long lived collections. If the queue's not going to survive the end of the function it was created in, then it matters a whole lot less.
On a whole, you really shouldn't bother. Let the compiler and GC do their jobs so you can do yours.
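A minimal sketch of what that looks like inside such a collection (SimpleStack is illustrative, not a real library type):
class SimpleStack<T>
{
    private T[] _items = new T[16];
    private int _count;

    public void Push(T item)
    {
        if (_count == _items.Length)
            System.Array.Resize(ref _items, _count * 2);
        _items[_count++] = item;
    }

    public T Pop()
    {
        T item = _items[--_count];
        // Null the slot so this long-lived array does not keep the
        // removed object alive on the client's behalf.
        _items[_count] = default(T);
        return item;
    }
}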
Take a look at this article as well: http://www.codeproject.com/KB/cs/idisposable.aspx
For the most part, setting an object to null has no effect. The only time you should be sure to do so is if you are working with a "large object", which is one larger than 85 KB in size and therefore allocated on the large object heap (such as bitmaps).
I believe by design of the GC implementors, you can't speed up GC with nullification. I'm sure they'd prefer you not worry yourself with how/when GC runs -- treat it like this ubiquitous Being protecting and watching over and out for you...(bows head down, raises fist to the sky)...
Personally, I often explicitly set variables to null when I'm done with them as a form of self documentation. I don't declare, use, then set to null later -- I null immediately after they're no longer needed. I'm saying, explicitly, "I'm officially done with you...be gone..."
Is nullifying necessary in a GC'd language? No. Is it helpful for the GC? Maybe yes, maybe no, don't know for certain, by design I really can't control it, and regardless of today's answer with this version or that, future GC implementations could change the answer beyond my control. Plus if/when nulling is optimized out it's little more than a fancy comment if you will.
I figure if it makes my intent clearer to the next poor fool who follows in my footsteps, and if it "might" potentially help GC sometimes, then it's worth it to me. Mostly it makes me feel tidy and clear, and Mongo likes to feel tidy and clear. :)
I look at it like this: Programming languages exist to let people give other people an idea of intent and a compiler a job request of what to do -- the compiler converts that request into a different language (sometimes several) for a CPU -- the CPU(s) could give a hoot what language you used, your tab settings, comments, stylistic emphases, variable names, etc. -- a CPU's all about the bit stream that tells it what registers and opcodes and memory locations to twiddle. Many things written in code don't convert into what's consumed by the CPU in the sequence we specified. Our C, C++, C#, Lisp, Babel, assembler or whatever is theory rather than reality, written as a statement of work. What you see is not what you get, yes, even in assembler language.
I do understand the mindset of "unnecessary things" (like blank lines) "are nothing but noise and clutter up code." That was me earlier in my career; I totally get that. At this juncture I lean toward that which makes code clearer. It's not like I'm adding even 50 lines of "noise" to my programs -- it's a few lines here or there.
There are exceptions to any rule. In scenarios with volatile memory, static memory, race conditions, singletons, usage of "stale" data and all that kind of rot, that's different: you NEED to manage your own memory, locking and nullifying as apropos because the memory is not part of the GC'd Universe -- hopefully everyone understands that. The rest of the time with GC'd languages it's a matter of style rather than necessity or a guaranteed performance boost.
At the end of the day make sure you understand what is eligible for GC and what's not; lock, dispose, and nullify appropriately; wax on, wax off; breathe in, breathe out; and for everything else I say: If it feels good, do it. Your mileage may vary...as it should...
I think setting something back to null is messy. Imagine a scenario where the item being set to null is exposed, say, via a property. Now if somehow some piece of code accidentally uses this property after the item is disposed, you will get a NullReferenceException, which requires some investigation to figure out exactly what is going on.
I believe framework disposables will throw ObjectDisposedException instead, which is more meaningful. Not setting these back to null would be better, then, for that reason.
Some objects expose a Dispose() method, which forces their resources to be released.