WeakReference understanding - C#

I want to create the dictionary of all the ViewModels.
public static Dictionary<string, WeakReference> vmCollection = new Dictionary<string, WeakReference>();
Adding it like this
vmCollection.Add(name, new WeakReference(viewModel));
And calling the required method like this..
((vmCollection[name].Target) as BaseViewModel).NewMessage(message);
Do I need to maintain it as a WeakReference? What could the consequences be if I don't?

The only consequence of not using a WeakReference is that the reference in your dictionary will prevent the View Model instances from being garbage collected. A WeakReference allows garbage collection (assuming there are no other strong references elsewhere).
An item becomes eligible for garbage collection when it has no references to it. WeakReference does not create a "countable" reference, thus you can keep a sort-of-reference to it, but still let it be eligible if your WeakReference is the only thing left looking at it.
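As a minimal sketch of what that means for the dictionary above (the helper name and the pruning step are my own assumptions, not from the question), any caller has to cope with Target being null:
public static void NotifyViewModel(string name, string message)
{
    WeakReference reference;
    if (!vmCollection.TryGetValue(name, out reference))
        return; // nothing registered under this name

    // Target returns null once the view model has been garbage collected.
    var viewModel = reference.Target as BaseViewModel;
    if (viewModel != null)
        viewModel.NewMessage(message);
    else
        vmCollection.Remove(name); // prune the dead entry
}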
Whether you need it or not really depends on what sort of life-cycle your View Models have. If they need disposing of, or otherwise "letting go", then you may need to use a WeakReference, or expose a way to remove the reference from the dictionary instead.
As I mention in the comments, I tend to steer away from WeakReference in favour of handling the life-cycle of the relevant objects explicitly. That said, they are useful when you simply don't have visibility of the life-cycle at the relevant points. In your situation you should have the necessary visibility, as these objects are all likely in the UI layer, so I would try not to use them.
Here is a resource on the topic:
Weak References MSDN article
Guidelines extract from the above MSDN link:
Use long weak references only when necessary, as the state of the object is unpredictable after finalization.
Avoid using weak references to small objects, because the pointer itself may be as large or larger.
Avoid using weak references as an automatic solution to memory management problems. Instead, develop an effective caching policy for handling your application's objects.
I believe the last guideline point applies to your situation.

I took a slightly different approach.
For this example, I only have a single instance, but I'm sure it's fairly easily extendable for multiple instances...
So, on my class I create the following Action (it could be a Func if you need something returned). For my example, I'm just pushing an Exception around:
private static Action<Exception> StaticAccessorToInstanceMethod { get; set; }
And the instance method I want to call is:
public void HandleExceptionDetails(Exception e)
{
// Content of the method on the instance
}
I then have this in my constructor:
StaticAccessorToInstanceMethod = this.HandleExceptionDetails;
And the following in the destructor:
StaticAccessorToInstanceMethod = null;
(If you're dealing with multiple instances, then the constructor and destructor code would be a bit different; see the sketch at the end of this answer.)
Then the static method simply calls the instance method:
public static void HandleGeneralException(Exception ex)
{
StaticAccessorToInstanceMethod(ex);
}
I've left out defensive logic.
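For the multiple-instance case, here is a rough sketch of how I imagine it could be extended (the class name is invented, and I've used an explicit Unregister method because the static delegate keeps subscribed instances alive, so a destructor would never run):
public class ExceptionSource
{
    private static event Action<Exception> InstanceHandlers;

    public ExceptionSource()
    {
        InstanceHandlers += this.HandleExceptionDetails;
    }

    // Explicit unregistration: the static delegate holds a strong reference
    // to the instance, so relying on a destructor here would never fire.
    public void Unregister()
    {
        InstanceHandlers -= this.HandleExceptionDetails;
    }

    public void HandleExceptionDetails(Exception e)
    {
        // Content of the method on the instance
    }

    public static void HandleGeneralException(Exception ex)
    {
        var handlers = InstanceHandlers;
        if (handlers != null)
            handlers(ex); // fans out to every registered instance
    }
}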

Related

Is there a way to force a method to inherit a scope from its caller?

I have a method outside its caller's scope that does a few things, and I have to call this method multiple times in multiple places. Is there any way to make the entire scope of the caller available to the method without passing parameters and without using global variables? For example, suppose it needs access to a List and an Entity Framework context.
Instead of
void myMethod(string _string, List<string> _stringList, EntityContext _db)
{
//log _string to a database table
//add _string to _stringList
//etc.
}
Is there a way I can just pass the _string and make the method inherit the scope as if I'm just writing the same three lines of code everywhere I call this method? It seems a lot cleaner to call myMethod("foo") than myMethod("foo", stringList, MyEntities).
I could create a class, instantiate it, and call the class, but I'm just plain curious if scope inheritance or scope passing is a thing.
Absolutely don't do that. If you have a context you need to pass, use a class to represent the context needed, but don't try to handwave it away and hide it. It makes for unmaintainable code full of interdependencies.
In fact, the "bother" or "overhead" of passing the context object around is a good thing: it points out that having dependencies between the elements of your software project is not free. If you think that writing out the extra parameter is "too much work", then you're missing the forest for the trees: the dependency thus introduced has a much higher mental overhead than the mere mechanics of typing an extra parameter. After you pass that context a few times, typing it will be second nature and have 0 real overhead. The typing is cheap and doesn't require thinking, but keeping in mind the dependency and how it figures in the design of the overall system is anything but.
So: if you are trying to argue that introducing the dependency is worth it, then you have to back it up with actions and actually pass the context object around. The real cost is in the dependency, not the typing. Otherwise, it's a case of "talk is cheap" :)
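To make "use a class to represent the context needed" concrete, here's a minimal hypothetical sketch (the type and property names are mine, not from the question):
public class LoggingContext
{
    public List<string> StringList { get; set; }
    public EntityContext Db { get; set; }
}

void myMethod(string _string, LoggingContext context)
{
    // log _string to a database table via context.Db
    context.StringList.Add(_string);
}

// Call sites now pass one object instead of several arguments:
// myMethod("foo", loggingContext);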
One way of decreasing the apparent "cost" of passing such context objects is to upset the balance and make the context object actually do something, besides just carrying data. You would then use the context object to manipulate the objects for you, instead of calling the methods on the objects. This sort of "inversion" is quite handy, and often results in better design. After all, the presence of the context indicates that there's an overarching common state, and that perhaps too much functionality is delegated to the "end object", making it intertwined with the common state, whereas it may make more sense in the context object, making the end object less dependent on the presence of any particular external state.
You'd want the context to have methods that require "seeing the big picture", i.e. being aware of the presence of multiple objects, whereas the "leaf objects" (the ones with myMethod) should have methods that don't require the context, or that are general enough not to force any particular context class.
In your case, myMethod perhaps instead of working directly on an EntityContext could generate a functor or a similar action-wrapping object that performs the action, and this could then be applied by the caller (e.g. the context) to execute the database action. This way later it'll be easier to centrally manage the queue of database operations, etc.
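A hedged sketch of that inversion (signatures assumed, not prescribed): myMethod no longer touches the EntityContext itself; it returns an action that the context-owning caller applies:
static Action<EntityContext> myMethod(string _string, List<string> _stringList)
{
    _stringList.Add(_string);
    // Defer the database work; whoever owns the context decides when to run it.
    return db => { /* log _string to a database table via db */ };
}

// At the call site that owns the context:
// var dbAction = myMethod("foo", stringList);
// dbAction(MyEntities); // or enqueue it for centrally managed execution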
When I refactor large projects, this sort of "context inversion" comes in handy often, and the need for such patterns is very common. Usually, as large projects grow, the "leaf classes" start out lean and end up acquiring functionality that belongs at a higher level. This is why using good tooling to explore the history of the repository is imperative, and it's equally important that the entire repository history is available, i.e. that it was properly imported to git. I personally use DeepGit to trace the history of the code I work on, and find such a tool indispensable. DeepGit is free as in beer for any use, and if you're not using a tool with similar functionality, you're seriously missing out, I think.
The need to pass contexts around is usually the indicator that a higher level has to be designed and introduced, and the "leafs" then need to be slimmed down, their context-using functionality moved out into the higher level. A few years down the road yet another higher level ends up being needed, although there are projects so far gone that when you just refactor them to make sense of the code base, two or three additional layers make themselves apparent!
I know of two ways this can be done. Consider you have the following method:
static void myMethod(string _stringA, string _stringB, string _stringC)
{
Console.WriteLine($"{_stringA},{_stringB},{_stringC}");
}
The first way is to create an overloaded method in the class. For example:
static void myMethod(string _stringA)
{
myMethod(_stringA, "stringB", "stringC");
}
The second way (which I would not advise) is doing it the functional way, like JavaScript does, by using delegates:
public delegate void MethodDelegate(string _string);
static MethodDelegate mMethod1;
static MethodDelegate mMethod2;
static void Main(string[] args)
{
mMethod1 = delegate (string s) { myMethod(s, "method1-str-a", "method1-str-b"); };
mMethod1("str1");
mMethod2 = delegate (string s) { myMethod(s, "method2-str-a", "method2-str-b"); };
mMethod2("str2");
}

Good or bad practice? Initializing objects in getter

I have a strange habit it seems... according to my co-worker at least. We've been working on a small project together. The way I wrote the classes is (simplified example):
[Serializable()]
public class Foo
{
public Foo()
{ }
private Bar _bar;
public Bar Bar
{
get
{
if (_bar == null)
_bar = new Bar();
return _bar;
}
set { _bar = value; }
}
}
So, basically, I only initialize any field when a getter is called and the field is still null. I figured this would reduce overhead by not initializing any properties that aren't used anywhere.
ETA: The reason I did this is that my class has several properties that return an instance of another class, which in turn also have properties with yet more classes, and so on. Calling the constructor for the top class would subsequently call all constructors for all these classes, when they are not always all needed.
Are there any objections against this practice, other than personal preference?
UPDATE: I have considered the many differing opinions in regards to this question and I will stand by my accepted answer. However, I have now come to a much better understanding of the concept and I'm able to decide when to use it and when not.
Cons:
Thread safety issues
Not obeying a "setter" request when the value passed is null
Micro-optimizations
Exception handling should take place in a constructor
Need to check for null in class' code
Pros:
Micro-optimizations
Properties never return null
Delay or avoid loading "heavy" objects
Most of the cons are not applicable to my current library, however I would have to test to see if the "micro-optimizations" are actually optimizing anything at all.
LAST UPDATE:
Okay, I changed my answer. My original question was whether or not this is a good habit. And I'm now convinced that it's not. Maybe I will still use it in some parts of my current code, but not unconditionally and definitely not all the time. So I'm going to lose my habit and think about it before using it. Thanks everyone!
What you have here is a naive implementation of "lazy initialization".
Short answer:
Using lazy initialization unconditionally is not a good idea. It has its places but one has to take into consideration the impacts this solution has.
Background and explanation:
Concrete implementation:
Let's first look at your concrete sample and why I consider its implementation naive:
It violates the Principle of Least Surprise (POLS). When a value is assigned to a property, it is expected that this value is returned. In your implementation this is not the case for null:
foo.Bar = null;
Assert.Null(foo.Bar); // This will fail
It introduces quite some threading issues: Two callers of foo.Bar on different threads can potentially get two different instances of Bar and one of them will be without a connection to the Foo instance. Any changes made to that Bar instance are silently lost.
This is another case of a violation of POLS. When only the stored value of a property is accessed it is expected to be thread-safe. While you could argue that the class simply isn't thread-safe - including the getter of your property - you would have to document this properly as that's not the normal case. Furthermore the introduction of this issue is unnecessary as we will see shortly.
In general:
It's now time to look at lazy initialization in general:
Lazy initialization is usually used to delay the construction of objects that take a long time to be constructed or that take a lot of memory once fully constructed.
That is a very valid reason for using lazy initialization.
However, such properties normally don't have setters, which gets rid of the first issue pointed out above.
Furthermore, a thread-safe implementation, such as Lazy<T>, would be used to avoid the second issue.
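For illustration, a minimal sketch of the question's property rewritten with Lazy<T> (no setter, thread-safe by default):
public class Foo
{
    // The default LazyThreadSafetyMode.ExecutionAndPublication guarantees the
    // factory runs only once and all threads see the same Bar instance.
    private readonly Lazy<Bar> _bar = new Lazy<Bar>(() => new Bar());

    public Bar Bar
    {
        get { return _bar.Value; }
    }
}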
Even when considering these two points in the implementation of a lazy property, the following points are general problems of this pattern:
Construction of the object could be unsuccessful, resulting in an exception from a property getter. This is yet another violation of POLS and therefore should be avoided. Even the section on properties in the "Design Guidelines for Developing Class Libraries" explicitly states that property getters shouldn't throw exceptions:
Avoid throwing exceptions from property getters.
Property getters should be simple operations without any preconditions. If a getter might throw an exception, consider redesigning the property to be a method.
Automatic optimizations by the compiler are hurt, namely inlining and branch prediction. Please see Bill K's answer for a detailed explanation.
The conclusion of these points is the following:
For each single property that is implemented lazily, you should have considered these points.
That means, that it is a per-case decision and can't be taken as a general best practice.
This pattern has its place, but it is not a general best practice when implementing classes. It should not be used unconditionally, because of the reasons stated above.
In this section I want to discuss some of the points others have brought forward as arguments for using lazy initialization unconditionally:
Serialization:
EricJ states in one comment:
An object that may be serialized will not have its constructor invoked when it is deserialized (depends on the serializer, but many common ones behave like this). Putting initialization code in the constructor means that you have to provide additional support for deserialization. This pattern avoids that special coding.
There are several problems with this argument:
Most objects never will be serialized. Adding some sort of support for it when it is not needed violates YAGNI.
When a class needs to support serialization, there are ways to enable it without resorting to a workaround that, at first glance, has nothing to do with serialization.
Micro-optimization:
Your main argument is that you want to construct the objects only when someone actually accesses them. So you are actually talking about optimizing the memory usage.
I don't agree with this argument for the following reasons:
In most cases, a few more objects in memory have no impact whatsoever on anything. Modern computers have more than enough memory. Without actual problems confirmed by a profiler, this is premature optimization, and there are good reasons against it.
I acknowledge the fact that sometimes this kind of optimization is justified. But even in these cases lazy initialization doesn't seem to be the correct solution. There are two reasons speaking against it:
Lazy initialization potentially hurts performance. Maybe only marginally, but as Bill's answer showed, the impact is greater than one might think at first glance. So this approach basically trades performance for memory.
If you have a design where it is a common use case to use only parts of the class, this hints at a problem with the design itself: The class in question most likely has more than one responsibility. The solution would be to split the class into several more focused classes.
It is a good design choice. Strongly recommended for library code or core classes.
It is called by some "lazy initialization" or "delayed initialization" and it is generally considered by all to be a good design choice.
First, if you initialize in the declaration of class level variables or constructor, then when your object is constructed, you have the overhead of creating a resource that may never be used.
Second, the resource only gets created if needed.
Third, you avoid garbage collecting an object that was not used.
Lastly, it is easier to handle initialization exceptions that may occur in the property than exceptions that occur during initialization of class-level variables or in the constructor.
There are exceptions to this rule.
Regarding the performance argument of the additional check for initialization in the "get" property, it is insignificant. Initializing and disposing an object is a more significant performance hit than a simple null pointer check with a jump.
Design Guidelines for Developing Class Libraries at http://msdn.microsoft.com/en-US/library/vstudio/ms229042.aspx
Regarding Lazy<T>
The generic Lazy<T> class was created exactly for what the poster wants, see Lazy Initialization at http://msdn.microsoft.com/en-us/library/dd997286(v=vs.100).aspx. If you have older versions of .NET, you have to use the code pattern illustrated in the question. This code pattern has become so common that Microsoft saw fit to include a class in the latest .NET libraries to make it easier to implement the pattern. In addition, if your implementation needs thread safety, then you have to add it.
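For pre-.NET 4.0 code, a hedged sketch of a thread-safe version of the question's pattern, using double-checked locking (details may vary with your needs):
public class Foo
{
    private readonly object _barLock = new object();
    private volatile Bar _bar;

    public Bar Bar
    {
        get
        {
            // The first check skips the lock on the common path; the second
            // check ensures only one thread ever constructs Bar.
            if (_bar == null)
            {
                lock (_barLock)
                {
                    if (_bar == null)
                        _bar = new Bar();
                }
            }
            return _bar;
        }
    }
}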
Primitive Data Types and Simple Classes
Obviously, you are not going to use lazy initialization for primitive data types or simple classes like List<string>.
Before Commenting about Lazy
Lazy<T> was introduced in .NET 4.0, so please don't add yet another comment regarding this class.
Before Commenting about Micro-Optimizations
When you are building libraries, you must consider all optimizations. For instance, in the .NET classes you will see bit arrays used for Boolean class variables throughout the code to reduce memory consumption and memory fragmentation, just to name two "micro-optimizations".
Regarding User-Interfaces
You are not going to use lazy initialization for classes that are directly used by the user-interface. Last week I spent the better part of a day removing lazy loading of eight collections used in a view-model for combo-boxes. I have a LookupManager that handles lazy loading and caching of collections needed by any user-interface element.
"Setters"
I have never used a set-property ("setter") for any lazy-loaded property. Therefore, you would never allow foo.Bar = null;. If you need to set Bar, then I would create a method called SetBar(Bar value) and not use lazy initialization.
Collections
Class collection properties are always initialized when declared because they should never be null.
Complex Classes
Let me put this differently: you use lazy initialization for complex classes, which are usually poorly designed classes.
Lastly
I never said to do this for all classes or in all cases. It is a bad habit.
Have you considered implementing this pattern using Lazy<T>?
In addition to easy creation of lazy-loaded objects, you get thread safety while the object is initialized:
http://msdn.microsoft.com/en-us/library/dd642331.aspx
As others have said, you lazily load objects if they're really resource-heavy or if it takes some time to load them at construction time.
I think it depends on what you are initialising. I probably wouldn't do it for a list, as the construction cost is quite small, so it can go in the constructor. But if it were a pre-populated list, then I probably wouldn't create it until it was needed for the first time.
Basically, if the cost of construction outweighs the cost of doing a conditional check on each access, then lazily create it. If not, do it in the constructor.
Lazy instantiation/initialization is a perfectly viable pattern. Keep in mind, though, that as a general rule consumers of your API do not expect getters and setters to take discernible time from the end-user's point of view (or to fail).
The downside that I can see is that if you want to check whether Bar is null, it never will be: the getter creates the instance right there.
I was just going to put a comment on Daniel's answer but I honestly don't think it goes far enough.
Although this is a very good pattern to use in certain situations (for instance, when the object is initialized from the database), it's a HORRIBLE habit to get into.
One of the best things about an object is that it offers a secure, trusted environment. The very best case is if you make as many fields as possible final, filling them all in with the constructor. This makes your class quite bulletproof. Allowing fields to be changed through setters is a little less so, but not terrible. For instance:
class SafeClass
{
    String name="";
    Integer age=0;

    public void setName(String newName)
    {
        assert(newName != null);
        name=newName;
    } // follow this pattern for age
    ...
    public String toString() {
        return "Safe Class has name:"+name+" and age:"+age;
    }
}
With your pattern, the toString method would look like this:
public String toString() {
    if(name == null)
        throw new IllegalStateException("SafeClass got into an illegal state! name is null");
    if(age == null)
        throw new IllegalStateException("SafeClass got into an illegal state! age is null");
    return "Safe Class has name:"+name+" and age:"+age;
}
Not only this, but you need null checks everywhere you might possibly use that object in your class. (Outside the class is safe because of the null check in the getter, but you should mostly be using your class's members inside the class.)
Also, your class is perpetually in an uncertain state. For instance, if you decided to make that class a Hibernate entity by adding a few annotations, how would you do it?
If you make any decision based on some micro-optimization without requirements and testing, it's almost certainly the wrong decision. In fact, there is a really, really good chance that your pattern is actually slowing down the system even under the most ideal of circumstances, because the if statement can cause a branch prediction failure on the CPU, which will slow things down many more times than just assigning a value in the constructor, unless the object you are creating is fairly complex or coming from a remote data source.
For an example of the branch prediction problem (which you are incurring repeatedly, not just once), see the first answer to this awesome question: Why is it faster to process a sorted array than an unsorted array?
Let me just add one more point to many good points made by others...
The debugger will (by default) evaluate the properties when stepping through the code, which could potentially instantiate the Bar sooner than would normally happen by just executing the code. In other words, the mere act of debugging is changing the execution of the program.
This may or may not be a problem (depending on side-effects), but is something to be aware of.
Are you sure Foo should be instantiating anything at all?
To me it seems smelly (though not necessarily wrong) to let Foo instantiate anything at all. Unless it is Foo's express purpose to be a factory, it should not instantiate its own collaborators, but instead get them injected through its constructor.
If, however, Foo's purpose is to create instances of type Bar, then I don't see anything wrong with doing it lazily.
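A minimal sketch of that injection route (IBar is an assumed abstraction; nothing in the question mandates it):
public interface IBar { }

public class Foo
{
    private readonly IBar _bar;

    // Foo no longer decides when or how its collaborator is created;
    // the caller (or a DI container) supplies it.
    public Foo(IBar bar)
    {
        _bar = bar;
    }
}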

Composition and interaction with owner instance

I was wondering what the best practice is for accessing the owner instance when using composition (not aggregation).
public class Manager
{
public List<ElementToManage> Listelmt;
public List<Filter> ListeFilters;
public void LoadState(){}
}
public class Filter
{
public ElementToManage instance1;
public ElementToManage instance2;
public object value1;
public object value2;
public void LoadState()
{
//need to access the property Listelmt in the owner instance (manager instance)
//instance1 = Listelmt.SingleOrDefault(...
}
}
So far I'm thinking about two possibilities:
Keep a reference to the owner in the Filter instance.
Declare an event in the Filter class. The manager instance subscribes to it, and the filter raises it when needed.
I feel more like using the second possibility. It seems more OOP to me, and there are fewer dependencies between the classes (any later refactoring will be easier).
But debugging and tracing may be a bit harder in the long run.
Regarding business layer classes, I don't remember seeing events used for this purpose.
Any insight would be greatly appreciated.
There is no concept of an "owner" of a class instance; there should not be any strong coupling between the Filter instance and the object that happens to hold an instance of it.
That being the case, an event seems appropriate: it allows for loose coupling while enabling the functionality you want. If you went with option #1, on the other hand, you would limit the overall usefulness of the Filter class: now it can only be contained in Manager classes, and I don't think that is what you want.
Overall looking at your code you might want to pass in the relevant data the method LoadState operates on so it doesn't have to "reach out".
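For example (a sketch only, with the parameter type assumed from the Manager class; requires using System.Linq):
public class Filter
{
    public ElementToManage instance1;
    public ElementToManage instance2;

    // The manager hands over the data the filter needs, so the filter
    // no longer needs a reference back to its owner.
    public void LoadState(List<ElementToManage> elements)
    {
        instance1 = elements.SingleOrDefault(/* predicate elided, as in the question */);
    }
}

// In Manager.LoadState():
// foreach (var filter in ListeFilters)
//     filter.LoadState(Listelmt);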
I recommend keeping a reference to the owner of the Filter instance. An event can be handled by multiple handlers, each of which can change the result of the previous handler(s). And you probably don't want the owner to change during the lifetime of a Filter without the Filter instance being notified.
My short answer: neither.
First option to keep a reference to the owner is problematic for several reasons. Filter class no longer has a single responsibility. Filter and Manager are tightly coupled. etc.
The second option is only a little better. Yes, I've used events in similar scenarios; it rarely, if ever, ends well.
It's difficult to give a definite advice without more specific details. Some thoughts:
1) Are you sure your classes are as they should be? Maybe there should be a class to compose a single ElementToManage and a single Filter?
2) Who is responsible for creating a Filter? For example, if it is Manager, maybe the Manager can give the list as a construction parameter? Maybe you can create a FilterFactory class that does any needed initializations.
3) Who calls filter.LoadState()? Maybe the needed list could be passed as a parameter to the LoadState() method.
4) I frequently use an "Initialization Design Pattern" (my terminology). For example, I'll have a BinaryTree where parent and child point to each other. The Factory constructs the nodes in a plain state, and then calls an initialize method with other needed objects. The class becomes complicated because I probably need to ensure that an uninitialized object raises an error for every other usage, that an object is initialized only once, that it is initialized only through the Factory, etc. But if it works, it is usually the best solution, in my opinion.
5) I'm still trying to learn "Dependency Injection" and getting nowhere; I guess it may have something to do with your question. I wonder if someone will come up with an answer involving Dependency Injection.

C#: When to use events or a collection of objects derived from an event-handling interface?

I have what I think is a simple "problem" to which I have found a couple of solutions, but I am not sure which way to go and what the best practice is in C#.
I have a master object (say a singleton) instantiated once during the lifespan of the application. This "MasterClass" creates a bunch of objects of a new type, say "SlaveClass", every time MasterClass.Instance.CreateSlaveObject is called.
This MasterClass also monitors some other object for status change, and when that happens, notifies the SlaveClass objects it created of the change. Seems simple enough.
Since I come from the native C++ world, the way I did it first is to have an interface
interface IChangeEventListener
{
void ChangeHappened();
}
from which I derived "SlaveClass". Then in my "MasterClass" I have:
...
IList<IChangeEventListener> slaveList;
...
CreateSlaveObject
{
...
slaveList.Add(slave);
}
...
ChangeHappened()
{
...
foreach(var slave in slaveList)
{
slave.ChangeHappened();
}
}
And this works. But I kept wondering in the back of my mind if there is another (better) way of doing this. So I researched a bit more on the topic and saw the C# events.
So instead of maintaining a collection of slaves in the MasterClass, I would basically inject the MasterClass into the ctor of SlaveClass (or via a property) and let the SlaveClass object add its ChangeHappened method as an event handler. This would be illustrated as:
...Master...
public delegate void ChangeHappenedDelegate(object sender, NewsInfoArgs args);
public event ChangeHappenedDelegate ChangeHappenedEvent;
....
public SlaveClass (MasterClass publisher) //inject publisher service
{
publisher.ChangeHappenedEvent += ChangeHappened;
}
But this seems like unnecessary coupling between the Slave and the Master, though I like the elegance of the built-in event notification mechanism.
So should I keep my current code, or move to the event based approach (with publisher injection)? and why?
Or if you can propose an alternative solution I might have missed, I would appreciate that as well.
Well, in my mind, events and interfaces like you showed are two sides of the same coin (at least in the context you described), but they really are two different sides.
The way I think about events is that "I need to subscribe to your event because I need you to tell me when something happens to you".
Whereas the interface way is "I need to call a method on you to inform you that something happened to me".
It can sound like the same thing, but it differs in who is doing the talking; in both cases it is your "masterclass" that is talking, and that makes all the difference.
Note that if your slave classes have a method available that would be suitable for calling when something happened in your master class, you don't need the slave class to contain the code to hook this up, you can just as easily do this in your CreateSlaveClass method:
SlaveClass sc = new SlaveClass();
ChangeHappenedEvent += sc.ChangeHappened;
return sc;
This will basically use the event system, but let the MasterClass code do all the wiring of the events.
Do the SlaveClass objects live as long as the singleton class? If not, then you need to handle the case where they become stale/no longer needed. In the above case (basically in both yours and mine), you're holding a reference to those objects in your MasterClass, and thus they will never be eligible for garbage collection unless you forcibly remove the event handlers or unregister the interfaces.
To handle the problem with the SlaveClass not living as long as the MasterClass, you're going to run into the same coupling problem, as you also noted in the comment.
One way to "handle" (note the quotes) this could be to not really link directly to the correct method on the SlaveClass object, but instead create a wrapper object that internally will call this method. The benefit from this would be that the wrapper object could use a WeakReference object internally, so that once your SlaveClass object is eligible for garbage collection, it might be collected, and then the next time you try to call the right method on it, you would notice this, and thus you would have to clean up.
For instance, like this (and here I'm typing without the benefit of Visual Studio IntelliSense and a compiler; please take the meaning of this code, not the syntax errors):
public class WrapperClass
{
    private WeakReference _Slave;

    public WrapperClass(SlaveClass slave)
    {
        _Slave = new WeakReference(slave);
    }

    public void ChangeHappened()
    {
        Object o = _Slave.Target;
        if (o != null)
            ((SlaveClass)o).ChangeHappened();
        else
            MasterClass.Instance.ChangeHappenedEvent -= ChangeHappened;
    }
}
In your MasterClass, you would thus do something like this:
SlaveClass sc = new SlaveClass();
WrapperClass wc = new WrapperClass(sc);
ChangeHappenedEvent += wc.ChangeHappened;
return sc;
Once a SlaveClass object is collected, then on the next call (but not sooner) from your MasterClass to the event handlers, all those wrappers that no longer have an object will remove themselves.
I think it's just a matter of personal preference... personally, I like to use events, because it fits better with the .NET "philosophy".
In your case, if the MasterClass is a singleton, you don't need to pass it to the constructor of the SlaveClass, since it can be retrieved using the singleton property (or method):
public SlaveClass ()
{
MasterClass.Instance.ChangeHappenedEvent += ChangeHappened;
}
As you appear to have a singleton instance of MasterClass, why not subscribe to MasterClass.Instance.ChangeHappenedEvent? It's still a tight-ish coupling, but relatively neat.
Although events would be the normal paradigm for exposing public subscribe/unsubscribe functionality, in many cases they're not the best paradigm when subscribe/unsubscribe functionality doesn't need to be exposed. In this scenario, the only things that are allowed to subscribe/unsubscribe to the event are the ones that you yourself create, so there's no need for public subscribe/unsubscribe methods. Further, the master's knowledge of the slaves far exceeds the typical event publisher's knowledge of subscribers. Therefore, I favor having the master explicitly handle the connection/disconnection of subscriptions.
One feature I would probably add, though, which is made somewhat easier by the coupling between master and slave, would be a means of allowing slaves to be garbage-collected if all outside references are abandoned. The easiest way of doing this would probably be to have the master keep a list of WeakReferences to slaves. When it's necessary to notify slaves that something has happened, go through the list, dereference any WeakReference that's still alive, and notify the slave. If more than half the items in the list have been added since the last sweep and the list contains at least 250 or so items, copy all live references to a new list and replace the old one.
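A hedged sketch of that bookkeeping (the 250-item and half-new thresholds come from the paragraph above; everything else, including names, is assumed; requires using System.Linq):
private List<WeakReference> _slaveRefs = new List<WeakReference>();
private int _countAtLastSweep;

private void NotifySlaves()
{
    foreach (WeakReference reference in _slaveRefs)
    {
        var slave = reference.Target as SlaveClass;
        if (slave != null)
            slave.ChangeHappened(); // still alive: notify it
    }

    // Compact when the list is large and more than half of it is new since
    // the last sweep (approximated against the count recorded at that sweep).
    if (_slaveRefs.Count >= 250 && _slaveRefs.Count > 2 * _countAtLastSweep)
    {
        _slaveRefs = _slaveRefs.Where(r => r.IsAlive).ToList();
        _countAtLastSweep = _slaveRefs.Count;
    }
}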
An alternative approach which would avoid having to dereference the WeakReference objects so often would be to have the public "slave" object be a wrapper to a private object, and have the public object override Finalize to let the private object know that no outside references to it exist. This would require adding an extra level of strong indirection for public accesses to the object, but it would avoid creating--even momentarily--strong references to abandoned objects.

Setting Objects to Null/Nothing after use in .NET

Should you set all the objects to null (Nothing in VB.NET) once you have finished with them?
I understand that in .NET it is essential to dispose of any instances of objects that implement the IDisposable interface to release some resources, although the object can still be something after it is disposed (hence the IsDisposed property on forms), so I assume it can still reside in memory, or at least in part?
I also know that when an object goes out of scope it is then marked for collection ready for the next pass of the garbage collector (although this may take time).
So, with this in mind, will setting it to null speed up the system in releasing the memory, since it does not have to work out that it is no longer in scope, and are there any bad side effects?
MSDN articles never do this in examples, and currently I do this as I cannot see the harm. However, I have come across a mixture of opinions, so any comments are useful.
Karl is absolutely correct: there is no need to set objects to null after use. If an object implements IDisposable, just make sure you call IDisposable.Dispose() when you're done with that object (wrapped in a try..finally or a using() block). But even if you don't remember to call Dispose(), the finaliser method on the object should be calling Dispose() for you.
I thought this was a good treatment:
Digging into IDisposable
and this
Understanding IDisposable
There isn't any point in trying to second-guess the GC and its management strategies, because it's self-tuning and opaque. There was a good discussion about the inner workings with Jeffrey Richter on Dot Net Rocks: Jeffrey Richter on the Windows Memory Model. Richter's book CLR via C#, chapter 20, also has a great treatment.
Another reason to avoid setting objects to null when you are done with them is that it can actually keep them alive for longer.
e.g.
void foo()
{
var someType = new SomeType();
someType.DoSomething();
// someType is now eligible for garbage collection
// ... rest of method not using 'someType' ...
}
will allow the object referred to by someType to be GC'd after the call to "DoSomething", but
void foo()
{
var someType = new SomeType();
someType.DoSomething();
// someType is NOT eligible for garbage collection yet
// because that variable is used at the end of the method
// ... rest of method not using 'someType' ...
someType = null;
}
may sometimes keep the object alive until the end of the method. The JIT will usually optimize away the assignment to null, so both bits of code end up being the same.
No, don't null objects. You can check out https://web.archive.org/web/20160325050833/http://codebetter.com/karlseguin/2008/04/28/foundations-of-programming-pt-7-back-to-basics-memory/ for more information, but setting things to null won't do anything except dirty your code.
Also:
using(SomeObject someObject = new SomeObject())
{
    // do stuff with someObject
}
// the object will be disposed of
In general, there's no need to null objects after use, but in some cases I find it's a good practice.
If an object implements IDisposable and is stored in a field, I think it's good to null it, just to avoid accidentally using the disposed object. Bugs of the following sort can be painful:
this.myField.Dispose();
// ... at some later time
this.myField.DoSomething();
It's good to null the field after disposing it, so that you get a NullReferenceException right at the line where the field is used again. Otherwise, you might run into some cryptic bug down the line (depending on exactly what DoSomething does).
Chances are that your code is not structured tightly enough if you feel the need to null variables.
There are a number of ways to limit the scope of a variable:
As mentioned by Steve Tranby
using(SomeObject someObject = new SomeObject())
{
    // do stuff with someObject
}
// the object will be disposed of
Similarly, you can simply use curly brackets:
{
    // Declare the variable and use it
    SomeObject someObject = new SomeObject();
}
// The variable is no longer available
I find that using curly brackets without any "heading" really cleans up the code and helps make it more understandable.
In general, there's no need to set to null. But suppose you have Reset functionality in your class.
Then you might null the field after disposing it, because you do not want to call Dispose twice, since some Dispose implementations may not be written correctly and may throw a System.ObjectDisposedException.
private void Reset()
{
if(_dataset != null)
{
_dataset.Dispose();
_dataset = null;
}
//..More such member variables like oracle connection etc. _oraConnection
}
The only time you should set a variable to null is when the variable does not go out of scope and you no longer need the data associated with it. Otherwise there is no need.
This kind of "there is no need to set objects to null after use" is not entirely accurate. There are times you need to null the variable after disposing it.
Yes, you should ALWAYS call .Dispose() or .Close() on anything that has it when you are done. Be it file handles, database connections or disposable objects.
Separate from that is the very practical pattern of LazyLoad.
Say I have an instantiated ObjA of class A. Class A has a public property called PropB of class B.
Internally, PropB uses the private variable _PropB, which defaults to null. When PropB's getter is used, it checks whether _PropB is null, and if it is, opens the resources needed to instantiate a B into _PropB. It then returns _PropB.
In my experience, this is a really useful trick.
Where the need to null comes in: if you reset or change A in some way such that the contents of _PropB were derived from the previous values of A, you will need to Dispose AND null out _PropB so LazyLoad can reset and fetch the right value IF the code requires it.
If you only do _PropB.Dispose() and shortly after expect the null check for LazyLoad to succeed, it won't be null, and you'll be looking at stale data. In effect, you must null it after Dispose() just to be sure.
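A hedged sketch of the whole pattern as described (names follow the prose above; B is assumed to implement IDisposable):
private B _PropB; // defaults to null

public B PropB
{
    get
    {
        if (_PropB == null)
            _PropB = new B(); // open resources and instantiate on first use
        return _PropB;
    }
}

public void ResetA() // hypothetical reset of the parent A
{
    if (_PropB != null)
    {
        _PropB.Dispose();
        _PropB = null; // without this, the getter hands back stale data
    }
}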
I sure wish it were otherwise, but I've got code right now exhibiting this behavior: after a Dispose() on _PropB, and outside of the calling function that did the Dispose (and thus almost out of scope), the private prop still isn't null, and the stale data is still there.
Eventually, the disposed property will null out, but that's been non-deterministic from my perspective.
The core reason, as dbkk alludes is that the parent container (ObjA with PropB) is keeping the instance of _PropB in scope, despite the Dispose().
Stephen Cleary explains very well in this post: Should I Set Variables to Null to Assist Garbage Collection?
Says:
The Short Answer, for the Impatient
Yes, if the variable is a static field, or if you are writing an enumerable method (using yield return) or an asynchronous method (using async and await). Otherwise, no.
This means that in regular methods (non-enumerable and non-asynchronous), you do not set local variables, method parameters, or instance fields to null.
(Even if you’re implementing IDisposable.Dispose, you still should not set variables to null).
The important thing that we should consider is Static Fields.
Static fields are always root objects, so they are always considered “alive” by the garbage collector. If a static field references an object that is no longer needed, it should be set to null so that the garbage collector will treat it as eligible for collection.
Setting static fields to null is meaningless if the entire process is shutting down. The entire heap is about to be garbage collected at that point, including all the root objects.
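A small illustration (the cache field is invented):
public static class ReportCache
{
    // A static field is a GC root: whatever it references stays alive
    // for the lifetime of the process.
    private static byte[] _cachedReport;

    public static void Clear()
    {
        _cachedReport = null; // the buffer is now eligible for collection
    }
}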
Conclusion:
Static fields; that’s about it. Anything else is a waste of time.
There are some cases where it makes sense to null references. For instance, when you're writing a collection--like a priority queue--and by your contract, you shouldn't be keeping those objects alive for the client after the client has removed them from the queue.
But this sort of thing only matters in long lived collections. If the queue's not going to survive the end of the function it was created in, then it matters a whole lot less.
On a whole, you really shouldn't bother. Let the compiler and GC do their jobs so you can do yours.
Take a look at this article as well: http://www.codeproject.com/KB/cs/idisposable.aspx
For the most part, setting an object to null has no effect. The only time you should be sure to do so is if you are working with a "large object", which is one roughly 85K or larger in size (such as bitmaps).
I believe by design of the GC implementors, you can't speed up GC with nullification. I'm sure they'd prefer you not worry yourself with how/when GC runs -- treat it like this ubiquitous Being protecting and watching over and out for you...(bows head down, raises fist to the sky)...
Personally, I often explicitly set variables to null when I'm done with them as a form of self-documentation. I don't declare, use, then set to null later; I null immediately after they're no longer needed. I'm saying, explicitly, "I'm officially done with you... be gone..."
Is nullifying necessary in a GC'd language? No. Is it helpful for the GC? Maybe yes, maybe no, don't know for certain, by design I really can't control it, and regardless of today's answer with this version or that, future GC implementations could change the answer beyond my control. Plus if/when nulling is optimized out it's little more than a fancy comment if you will.
I figure if it makes my intent clearer to the next poor fool who follows in my footsteps, and if it "might" potentially help GC sometimes, then it's worth it to me. Mostly it makes me feel tidy and clear, and Mongo likes to feel tidy and clear. :)
I look at it like this: Programming languages exist to let people give other people an idea of intent and a compiler a job request of what to do -- the compiler converts that request into a different language (sometimes several) for a CPU -- the CPU(s) could give a hoot what language you used, your tab settings, comments, stylistic emphases, variable names, etc. -- a CPU's all about the bit stream that tells it what registers and opcodes and memory locations to twiddle. Many things written in code don't convert into what's consumed by the CPU in the sequence we specified. Our C, C++, C#, Lisp, Babel, assembler or whatever is theory rather than reality, written as a statement of work. What you see is not what you get, yes, even in assembler language.
I do understand the mindset of "unnecessary things" (like blank lines) "are nothing but noise and clutter up code." That was me earlier in my career; I totally get that. At this juncture I lean toward that which makes code clearer. It's not like I'm adding even 50 lines of "noise" to my programs -- it's a few lines here or there.
There are exceptions to any rule. In scenarios with volatile memory, static memory, race conditions, singletons, usage of "stale" data and all that kind of rot, that's different: you NEED to manage your own memory, locking and nullifying as apropos because the memory is not part of the GC'd Universe -- hopefully everyone understands that. The rest of the time with GC'd languages it's a matter of style rather than necessity or a guaranteed performance boost.
At the end of the day make sure you understand what is eligible for GC and what's not; lock, dispose, and nullify appropriately; wax on, wax off; breathe in, breathe out; and for everything else I say: If it feels good, do it. Your mileage may vary...as it should...
I think setting something back to null is messy. Imagine a scenario where the item being set to null is exposed, say, via a property. Now if somehow some piece of code accidentally uses this property after the item is disposed, you will get a NullReferenceException, which requires some investigation to figure out exactly what is going on.
Framework disposables will usually throw an ObjectDisposedException instead, which is more meaningful. Not setting these back to null would therefore be better.
Some objects expose a Dispose() method, which forces their resources to be released.
