I was wondering, why can't I overload '=' in C#? Can I get a better explanation?
Memory-managed languages usually work with references rather than objects directly. When you define a class and its members you are defining the object's behavior, but when you create a variable you are working with a reference to such an object.
Now, the operator = is applied to references, not objects. When you assign one reference to another, you are actually making the receiving reference point to the same object the other reference points to.
Type var1 = new Type();
Type var2 = new Type();
var2 = var1;
In the code above, two objects are created on the heap, one referred to by var1 and the other by var2. The last statement makes var2 point to the same object that var1 refers to. After that line, the garbage collector can free the second object, and only one object remains in memory. In the whole process, no operation is applied to the objects themselves.
Going back to why = cannot be overloaded: reference assignment is the only sensible thing the language can do here. You can overload operations that are applied to the objects, but not operations applied to the references themselves.
If you could overload '=' you would never be able to change an object reference after it's been created.
Think about it: any assignment theObjectWithOverloadedOperator = something inside the overloaded operator would itself invoke the overloaded operator again. So what would the overloaded operator really be doing? Maybe setting some other properties, or assigning the value to a new object (immutability)?
Generally not what '=' implies.
You can, however, overload the implicit and explicit conversion operators:
http://www.blackwasp.co.uk/CSharpConversionOverload.aspx
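For illustration, here is a minimal sketch of both kinds of conversion operator (a hypothetical Celsius type, not taken from the linked article):

public struct Celsius
{
    public readonly double Degrees;

    public Celsius(double degrees) { Degrees = degrees; }

    // Implicit conversion: double -> Celsius (no cast required at the call site).
    public static implicit operator Celsius(double degrees)
    {
        return new Celsius(degrees);
    }

    // Explicit conversion: Celsius -> double (the caller must write the cast).
    public static explicit operator double(Celsius c)
    {
        return c.Degrees;
    }
}

Celsius warm = 21.5;       // the implicit operator runs
double d = (double)warm;   // the explicit operator requires the cast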
Because it doesn't really make sense to do so.
In C#, = assigns an object reference to a variable. So it operates on variables and object references, not on the objects themselves. There is no point in overloading it per object type.
In C++, defining operator= makes sense for classes whose instances can be created, e.g., on the stack, because the objects themselves are stored in variables rather than accessed through references. So it makes sense to define how such an assignment should be performed. But even in C++, if you have a set of polymorphic classes which are typically used via pointers or references, you usually explicitly forbid copying them by declaring operator= and the copy constructor private (or by inheriting from boost::noncopyable), for exactly the same reasons you can't redefine = in C#. Simply put, if you have a reference or pointer of class A, you don't really know whether it points to an instance of class A or of class B, a subclass of A. So do you really know how to perform = in this situation?
Actually, overloading operator = would make sense if you could define classes with value semantics and allocate objects of those classes on the stack. But in C#, you can't.
One possible explanation is that proper reference updates would be impossible if you could overload the assignment operator. It would wreck the semantics: where people expect a reference to be updated, your = operator might be doing something else entirely. Not very programmer-friendly.
You can use implicit and explicit conversion operators to mitigate some of the apparent shortcomings of not being able to overload assignment.
I don't think there's any really particular single reason to point to. Generally, I think the idea goes like this:
If your object is a big, complicated object, doing something that isn't assignment with the = operator is probably misleading.
If your object is a small object, you may as well make it immutable and return new copies when performing operations on it, so that the assignment operator works the way you expect out of the box (as System.String does.)
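A minimal sketch of that second approach, using a hypothetical immutable Money struct (every operation returns a new value instead of mutating the receiver, so plain assignment always behaves as expected):

public struct Money
{
    public readonly decimal Amount;

    public Money(decimal amount) { Amount = amount; }

    // Returns a new value rather than modifying this instance.
    public Money Add(decimal delta) { return new Money(Amount + delta); }
}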
You can overload assignment in C#. Just not on an entire object, only on members of it. You declare a property with a setter:
class Complex
{
    private double real;

    public double Real
    {
        get { return real; }
        set { real = value; /* do something with value */ }
    }

    // more members
}
Now when you assign to Real, your own code runs.
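For example, using the Complex class above:

Complex c = new Complex();
c.Real = 3.5;   // this assignment runs the setter above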
The reason assignment to an object is not replaceable is because it is already defined by the language to mean something vitally important.
It's allowed in C++, and if you're not careful it can result in a lot of confusion and bug hunting.
This article explains this in great detail.
http://www.relisoft.com/book/lang/project/14value.html
Because shooting oneself in the foot is frowned upon.
On a more serious note, one can only hope you meant comparison rather than assignment. The framework makes elaborate provision for customizing equality/equivalence evaluation; search for "compar" in the help or online on MSDN.
Being able to define special semantics for assignment operations would be useful, but only if such semantics could be applied to all situations where one storage location of a given type was copied to another. Although standard C++ implements such assignment rules, it has the luxury of requiring that all types be defined at compile time. Things get much more complicated when Reflection and generics are added to the list.
Presently, the rules in .net specify that a storage location may be set to the default value for its type--regardless of what that type is--by zeroing out all the bytes. They further specify that any storage location can be copied to another of the same type by copying all the bytes. These rules apply to all types, including generics. Given two variables of type KeyValuePair<t1,t2>, the system can copy one to another without having to know anything but the size and alignment requirements of that type. If it were possible for t1, t2, or the type of any field within either of those types, to implement a copy constructor, code which copied one struct instance to another would have to be much more complicated.
That's not to say that such an ability wouldn't offer some significant benefits--it's possible that, were a new framework being designed, the benefits of custom value assignment operators and default constructors would exceed the costs. The costs of implementation, however, would be substantial in a new framework, and likely insurmountable for an existing one.
This code is working for me:
public class Class1
{
    // ... other members of Class1, including the 'property' member assigned below

    public static implicit operator Class1(Class2 value)
    {
        Class1 result = new Class1();
        result.property = value.prop;
        return result;
    }
}
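Usage is then a plain assignment (assuming Class2 exposes the prop member referenced above):

Class2 source = new Class2();
Class1 target = source;   // the implicit operator above runs here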
Types of Assignment Overloading
There are two ways to overload assignment:
1. When you feel the user may miss something and you want to force them to use a cast,
like float to int, where you lose the fractional part:
int a = (int)5.4f;
2. When you want the user to do it without even noticing that s/he is changing the type:
float f = 5;
How to Overload Assignment
For case 1, use the explicit keyword:
public static explicit operator ToType(FromType from)
{
    ToType to = new ToType();
    to.FillFrom(from);
    return to;
}
For case 2, use the implicit keyword:
public static implicit operator ToType(FromType from)
{
    ToType to = new ToType();
    to.FillFrom(from);
    return to;
}
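At the call site the difference looks like this (note that for a given pair of types you define one conversion or the other, not both):

FromType from = new FromType();
ToType a = (ToType)from;   // case 1: explicit operator, the cast is required
ToType b = from;           // case 2: implicit operator, no cast needed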
Update:
Note that this implementation can live in either the FromType or the ToType class, depending on your needs; there's no restriction. One of your classes can hold all the conversions while the other contains no code for this.
I frequently find myself wanting to do something along these lines:
Form form = new Form();
form.ClientSize.Width = 500;
Of course the compiler will now complain that this code is not valid, since ClientSize is a property, and not a variable.
We can fix this by setting the ClientSize in its entirety:
form.ClientSize = new Size(500, form.ClientSize.Height);
Or, in general:
Size s = form.ClientSize;
s.Width = 500;
form.ClientSize = s; //only necessary if s is a value-type. (Right?)
But this all looks unnecessary and obfuscated. Why can't the compiler do this for me? And of course, I'm asking about the general case, possibly involving even deeper levels of properties, not just the mundane example above.
Basically, I'm asking why there is no syntactic sugar to translate the line form.ClientSize.Width = 500 into the above code. Is this simply a feature which hasn't yet been implemented, is it to avoid stacking of side effects from different getters and setters, to prevent confusion when one of the setters isn't defined, or is there a completely different reason why something like this doesn't exist?
Why can't the compiler do this for me?
It can. In fact, if you were programming in VB it would do exactly that. The C# compiler doesn't do this because it generally takes the philosophy of doing what you tell it to do; it is not a language that tries to guess at what you want to do and do that instead. If you tell it to do something silly, it'll just let you do something silly. Now this particular case is such a common source of bugs, and given these exact semantics is virtually certain to be a bug, so it does result in an error, because it's nice like that.
C# programmers learn to rely on the C# compiler never deciding to do something you never told it to do; a compiler that guesses at what you want becomes a source of confusion and problems whenever it guesses wrong.
I believe that you have a fundamental misunderstanding of .NET here. You can set properties of properties all day for class types because you're modifying the data of a reference without changing the reference. Take this code for example which compiles and runs fine:
class Program
{
    public class Complex1
    {
        public Complex2 Complex2Property { get; set; }
    }

    public class Complex2
    {
        public int IntProperty { get; set; }
    }

    static void Main(string[] args)
    {
        // You must create instances of all properties to avoid a NullReferenceException
        // prior to accessing said properties
        var complex1 = new Complex1();
        complex1.Complex2Property = new Complex2();

        // Set property of property
        complex1.Complex2Property.IntProperty = 7;
    }
}
I assume your object is a struct or value type. The reason you can't do this for structs is that a struct is a value type - it gets copied around by value, not reference. So if I changed the above example to make Complex2 a struct, I could not do this line:
complex1.Complex2Property.IntProperty = 7;
That's because the property is syntactic sugar for a get method that returns the struct by value, i.e. a copy of the struct, not the struct the property holds. My change to that copy would therefore not affect the original property's struct at all, accomplishing nothing except modifying a copy of my data that isn't the data in my property.
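Concretely, with Complex2 declared as a struct, the compiler rejects that line with an error along the lines of CS1612 ("Cannot modify the return value ... because it is not a variable"):

public struct Complex2
{
    public int IntProperty { get; set; }
}

// error: cannot modify the return value of 'Complex1.Complex2Property'
// because it is not a variable
complex1.Complex2Property.IntProperty = 7;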
As for why the compiler doesn't do this for you? It definitely could, but it won't, because you'd never actually want to do this. There's no value in modifying a copy of an object that you never reassign to your property. This situation is a common error for developers who don't entirely understand value vs. reference types (myself included!), so the compiler chooses to warn you of your mistake.
For the compiler to allow myForm.ClientSize.Width = 500;, one of two things would be necessary: either the compiler would have to assume that the intended behavior is equivalent to:
var temp = myForm.ClientSize;
temp.Width = 500;
myForm.ClientSize = temp;
or else myForm would have to associate the name ClientSize with a method that accepts a delegate, with a signature along the lines of:
delegate void RefAction<T, TParam>(ref T it, ref TParam param);
void actupon_ClientSize<TParam>(RefAction<Size, TParam> proc, ref TParam param);
in which case the compiler could generate code similar to
myForm.actupon_ClientSize<int>((ref Size r, ref int dummy) => r.Width = 500, ref someDummyIntVar);
where someDummyIntVar would be an arbitrary variable of type int [the second ref parameter would make it possible to pass parameters to the lambda without generating a closure]. If the Framework described a standard way for object properties to be exposed like that, it would make many kinds of programming safer and more convenient. Unfortunately, no such feature exists, nor do I expect any future version of .NET to include it.
With regard to the first transformation, there are many cases where it would yield the desired effect, but also many where it would be unsafe. IMHO, there is no good reason why .NET shouldn't specify attributes to indicate when various transformations are and are not safe, but the need for them has existed since Day One, and since the programmers responsible for .NET have consistently decided that they'd rather declare mutable structures to be "evil" than do anything that would make them not evil, I doubt that will ever change either.
I have a struct which I put in a List<T>, I want to edit some value in that struct at a specific position. Is this at all possible without making a copy of the struct, editing the copy, and replacing the entry in the List?
No. To be able to do that you would need a reference to an element of the inner array, which List/IList does not provide.
You can do that with unsafe code and arrays if you have to.
From J. Richter's "CLR via C#", 3rd edition:
Value types should be immutable: that is, they should not define any members that modify any of the type's instance fields. In fact, I recommended that value types have their fields marked as readonly so that the compiler will issue errors should you accidentally write a method that attempts to modify a field.
...
Just keep in mind that value types and reference types have very different behaviors depending on how they're used.
Consider this code:
public interface IChangeStruct
{
    int Value { get; }
    void Change(int value);
}

public struct MyStruct : IChangeStruct
{
    int value;

    public MyStruct(int _value)
    {
        value = _value;
    }

    public int Value
    {
        get
        {
            return value;
        }
    }

    public void Change(int value)
    {
        this.value = value;
    }
}
and its usage:
static void Main(string[] args)
{
    MyStruct[] l = new MyStruct[]
    {
        new MyStruct(0)
    };

    Console.WriteLine(l[0].Value);
    l[0].Change(10);
    Console.WriteLine(l[0].Value);

    Console.ReadLine();
}
The output is:
0
10
So it does what you need.
However, the same won't work for List<T>, I guess for the reason mentioned by Alexei Levenkov. So I would strongly recommend changing the struct to a class if the type in question is not immutable per instance.
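For contrast, here is roughly what the same test looks like with a List<MyStruct> instead of the array (the indexer returns a copy, so the change is silently lost):

List<MyStruct> list = new List<MyStruct> { new MyStruct(0) };

Console.WriteLine(list[0].Value);   // 0
list[0].Change(10);                 // compiles, but mutates a temporary copy
Console.WriteLine(list[0].Value);   // still 0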
Your best bet is probably to have your structures expose their fields directly, and then use code like:
var temp = myList[3];
temp.X += 4;
myList[3] = temp;
I consider the failure of .NET to provide any means of updating list items in place to be a significant weakness, but I would still consider an exposed-field struct far superior to any alternative in cases where one wishes to represent a small group of orthogonal values which should not be "attached" to any other such group (such as the coordinates of a point, or the origin and size of a rectangle). The notion that structs should be "immutable" has been repeated as a mantra for a long time, but that doesn't mean it's good advice. The notion stems largely from two things:
1. Structs which modify `this` in any members outside their constructors are quirky. Such quirks used to (and to some extent still do) apply to property setters, but not to structs which simply expose their fields directly. Because Microsoft wrapped all struct fields in properties, mutable structures that could have had sensible semantics with exposed fields ended up with quirky semantics; Microsoft then blamed the quirky semantics on the fact that the structs were mutable, rather than on the needless wrapping of fields in properties.
2. Some people like to model .NET as having only one kind of object, as opposed to having value types and reference types as distinct kinds of entities. The behavior of so-called "immutable" value types is close enough to that of reference types that they can pretend the two are the same thing, whereas the behavior of easily-mutable value types is vastly different. In reality, it's easier to understand the behavior of exposed-field value types than to understand all the corner cases where so-called "immutable" value types behave differently from reference types, and understanding the latter is impossible without understanding the former. Note that while value types may pretend to be immutable, there is in reality no such thing as an immutable value type. The only distinction is between those which can be mutated conveniently and those which can only be mutated awkwardly.
In reality, if a type is supposed to represent a small group of orthogonal values, an exposed-field struct is a perfect fit. Even if one has to use clunky code like that shown above to update a field of an item in a List<structType>, it's better than any alternative using class types or so-called "immutable" structs. Knowing that the element type of myList is a structure with an exposed field X is enough to completely understand the code above. The only remotely decent alternative with a class or "immutable" struct would be myList[3] = myList[3].WithX(myList[3].X + 4);, but that would require the type in question to offer a WithX method (and presumably a WithWhatever() method for each field). Such methods would increase many-fold the amount of code one would have to read to find out for certain what a method actually does (one might expect WithX to return a new instance identical to the old one except for the value of X, but one wouldn't know until one had read all the code involved; by contrast, knowing that X is an exposed field of the structure type is sufficient to know what the above code does).
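For reference, a minimal sketch of what such a WithX method might look like, on a hypothetical PointStruct type with read-only fields:

public struct PointStruct
{
    public readonly int X;
    public readonly int Y;

    public PointStruct(int x, int y) { X = x; Y = y; }

    // Returns a copy identical to this one, except for X.
    public PointStruct WithX(int x) { return new PointStruct(x, Y); }
}

// Updating one field of an element in a List<PointStruct>:
myList[3] = myList[3].WithX(myList[3].X + 4);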
I'm having a hard time understanding when to use Object (boxing/unboxing) vs when to use generics.
For example:
public class Stack
{
int position;
object[] data = new object[10];
public void Push (object o) { data[position++] = o; }
public object Pop() { return data[--position]; }
}
VS.
public class Stack<T>
{
int position;
T[] data = new T[100];
public void Push(T obj) {data[position++] = obj; }
public T Pop() { return data[--position]; }
}
Which one should I use and under what conditions? It seems like with the System.Object way I can have objects of all sorts of types currently living within my Stack. So wouldn't this be always preferable? Thanks!
Always use generics! Using object results in cast operations and boxing/unboxing of value types. For these reasons generics are faster and more elegant (no casting). And, the main reason: you won't get InvalidCastExceptions when using generics.
So generics are faster, and errors are visible at compile time. System.Object means runtime exceptions and casting, which in general results in lower performance (sometimes MUCH lower).
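To make that concrete, here's a quick sketch using the two Stack classes from the question:

// Non-generic version: the mistake only shows up at runtime.
Stack untyped = new Stack();
untyped.Push("hello");
int n = (int)untyped.Pop();   // compiles, then throws InvalidCastException

// Generic version: the mistake never compiles.
Stack<int> typed = new Stack<int>();
// typed.Push("hello");       // compile-time error: cannot convert string to int
typed.Push(42);
int m = typed.Pop();          // no cast, no boxing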
A lot of people have recommended using generics, but it looks like they all miss the point. It's often not about the performance hit related to boxing primitive types or casting, it's about getting the compiler to work for you.
If I have a list of strings, I want the compiler to prove to me that it will always contain strings. Generics do just that: I specify the intent, and the compiler proves it for me.
Ideally, I would prefer an even richer type system where you could say for example that a type (even if it was a reference type) could not contain null values, but C# does unfortunately not currently offer that.
While there are times when you will want to use a non-generic collection (think caching, for instance), you almost always have collections of homogeneous objects, not heterogeneous ones. For a homogeneous collection, even if it is a collection of variants of a base type or interface, it's always better to use generics. This saves you from having to cast the result to the real type before you can use it, and omitting the cast makes your code more efficient and readable.
It all depends on what you need in the long run.
Unlike most answers here, I won't say "always use generics" because sometimes you do need to mix cats with cucumbers.
By all means, try to stick with generics for all the reasons already given in the other answers; for example, if you need to combine cats and dogs, create a base class Mammal and use Stack<Mammal>.
But when you really need to support every possible type, don't be afraid to use objects, they don't bite unless you're mistreating them. :)
With the object type, as you say you need to perform boxing and unboxing, which gets tedious very quickly. With generics, there's no need for that.
Also, I'd rather be more specific as to what kind of objects a class can work with and generics provides a great basis for that. Why mix unrelated data types in the first place? Your particular example of a stack emphasizes the benefit of generics over the basic object data type.
// This stack should only contain integers and not strings or floats or bools
Stack<int> intStack = new Stack<int>();
intStack.Push(1);
Remember that with generics you can specify interfaces so your class can interact with objects of many different classes, provided they all implement the same interface.
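For example, a small sketch of a generic constraint (a hypothetical Max helper, not from the question):

// Works with any type that implements IComparable<T>:
// int, string, DateTime, or one of your own classes.
public static T Max<T>(T a, T b) where T : IComparable<T>
{
    return a.CompareTo(b) >= 0 ? a : b;
}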
Use generics when you want your structure to handle a single type. For example, if you wanted a collection of strings you would want to instantiate a strongly typed List of strings like so:
List<string> myStrings = new List<string>();
If you want it to handle multiple types you can do without generics but you will incur a small performance hit for boxing/unboxing operations.
Generics are always preferred if possible.
Aside from performance, Generics allow you to make guarantees about the types of objects that you're working with.
The main reason this is preferred to casting is that the compiler knows what type the object is, and so it can give you compile errors that you find right away instead of runtime errors that might only happen under certain scenarios that you didn't test.
Generics are not a golden hammer. In cases where your activity is naturally non-generic, use good old object. One such case is caching: a cache naturally holds items of different types. I recently saw this implementation of a cache wrapper:
void AddCacheItem<T>(string key, T item, int duration, ICacheItemExpiration expiration)
{
    // . . . . . . .
    CacheManager.Add(cacheKey, item, .....
}
Question: what for, if CacheManager takes object anyway?
Then there was real havoc in the Get method:
public virtual T GetCacheItem<T>(string cacheKey)
{
    return (T)CacheManager.GetData(cacheKey); // <-- problem code
}
The problem above is that the cast will crash at runtime for a value type when the key is not in the cache: GetData returns null, and null cannot be unboxed into a value type.
I mended the method by adding a class constraint:
public T GetCacheItem<T>(string cacheKey) where T : class
because I like the idea of being able to do this:
var x = GetCacheItem<Person>("X");
string name = x?.FullName;
But I added a new method that accepts value types as well:
public object GetCacheItem(string cacheKey)
The bottom line: there is a use for object, especially when storing different types in one collection, or when you have compositions where completely arbitrary, unrelated objects can coexist and you need to consume them based on their runtime type.
Ok, so here's the question... is the new keyword obsolete?
Consider that in C# (and Java, I believe) there are strict rules for types. Classes are reference types and can only be created on the heap. POD types are created on the stack; if you want to allocate them on the heap you have to box them in an object type. In C#, structs are the exception: they can be created on the stack or on the heap.
Given these rules, does it make sense that we still have to use the new keyword? Wouldn't it make sense for the language to use the proper allocation strategy based on the type?
For example, we currently have to write:
SomeClassType x = new SomeClassType();
instead of
SomeClassType x = SomeClassType();
or even just
SomeClassType x;
The compiler would see that the type being created is a reference type and go ahead and allocate the memory for x on the heap.
This applies to other languages like ruby, php, et al. C/C++ allow the programmer more control over where objects are created, so it has good reason to require the new keyword.
Is new just a holdover from the 60's and our C based heritage?
SomeClassType x = SomeClassType();
In this case SomeClassType() might be a method located somewhere else; how would the compiler know whether to call that method or create a new object?
SomeClassType x;
This is not very useful: most people declare their variables like this and populate them later, when they need to. So it wouldn't be useful to create an instance in memory every time you declare a variable.
Your third option will not work, since sometimes we want to create an object of one type and assign it to a variable of another type. For instance:
Stream strm = new NetworkStream();
I want a stream type (perhaps to pass on somewhere), but internally I want a NetworkStream type.
Also many times I create a new object while calling a method:
myobj.Foo(new NetworkStream());
doing that this way:
myobj.Foo(NetworkStream());
is very confusing. Am I creating an object, or calling a method when I say NetworkStream()?
If you could just write SomeClassType x; and have it automatically initialized, that wouldn't allow for constructors with any parameters. Not every SomeClassType will have a parameterless constructor; how would the compiler know what arguments to supply?
public class Repository
{
    private IDbConnection connection;

    public Repository(IDbConnection connection)
    {
        if (connection == null)
        {
            throw new ArgumentNullException("connection");
        }

        this.connection = connection;
    }
}
How would you instantiate this object with just Repository rep;? It requires a dependent object to function properly.
Not to mention, you might want to write code like so:
Dictionary<int, SomeClass> instances = GetInstancesFromSomewhere();
SomeClass instance;
if (instances.TryGetValue(1, out instance))
{
// Do something
}
Would you really want it auto-initializing for you?
If you could just write SomeClassType x = SomeClassType(), there would be no way to distinguish a constructor call from a method in scope.
More generally:
I think there's a fundamental misunderstanding of what the new keyword is for. The fact that value types are allocated on the stack and "reference" types are allocated on the heap is an implementation detail. The new keyword is part of the specification. As a programmer, you don't care whether or not it's allocated on the heap or stack (most of the time), but you do need to specify how the object gets initialized.
There are other valid types of initializers too, such as:
int[] values = { 1, 2, 3, 4 };
Voilà, an initialization with no new. In this case the compiler was smart enough to figure it out for you because you provided a literal expression that defines the entire object.
So I guess my "answer" is, don't worry about where the object exists memory-wise; use the new keyword as it's intended, as an object initializer for objects that require initialization.
For starters:
SomeClassType x;
is not initialized so no memory should be allocated.
Other than that, how do you avoid problems when there is a method with the same name as the class?
Say you write some code:
int World() { return 3; }
int hello = World();
and everything is nice and jolly.
Now you write a new Class later:
class World
{
...
}
Suddenly your int hello = World() line is ambiguous.
For performance reasons, this might be a bad idea. For instance, if you wanted x to be a reference to an object that's already been created, it would be a waste of memory and processor time to create a new object and then immediately discard it.
Wouldn't it make sense for the language to use the proper allocation strategy based on the type?
That's exactly what the C# compiler/runtime already does. The new keyword is just the syntax for constructing an object in whatever way makes sense for that object.
Removing the new keyword would make it less obvious that a constructor is being called. For a similar example, consider out parameters:
myDictionary.TryGetValue(key, out val);
The compiler already knows that val is an out. If you don't say so, it complains. But it makes the code more readable to have it stated.
At least, that is the justification - in modern IDEs these things could be found and highlighted in other ways besides actual inserted text.
Is new just a holdover from the 60's and our C based heritage?
Definitely not. C doesn't have a new keyword.
I've been programming in Java for a number of years and I have never cared whether my object is on the heap or the stack. From that perspective it's all the same to me whether I type new or not.
I guess this would be more relevant for other languages.
The only thing I care about is that the class has the right operations and that my objects are created properly.
BTW, I use (or try to use) the new keyword only in factory methods, so my client code looks like this anyway:
SomeClassType x = SomeClassType.newInstance();
See Effective Java, Item 1.
If you don't have a parameterless constructor, this could get ugly.
If you have multiple constructors, this could get real ugly.
There are already a number of questions on the definition of "ref" and "out" parameters, but they seem like bad design to me. Are there any cases where you think ref is the right solution?
It seems like you could always do something else that is cleaner. Can someone give me an example of where this would be the "best" solution for a problem?
In my opinion, ref largely compensated for the difficulty of declaring new utility types and the difficulty of "tacking information on" to existing information, both of which C# has taken huge steps toward addressing since its genesis through LINQ, generics, and anonymous types.
So no, I don't think there are a lot of clear use cases for it anymore. I think it's largely a relic of how the language was originally designed.
I do think that it still makes sense (as mentioned above) in the case where you need to return some kind of error code from a function as well as a return value, but nothing else (so a bigger type isn't really justified). If I were doing this all over the place in a project, I would probably define some generic wrapper type for thing-plus-error-code, but in any given instance ref and out are OK.
Well, ref is generally used for specialized cases, but I wouldn't call it redundant or a legacy feature of C#. You'll see it (and out) used a lot in XNA, for example. In XNA, a Matrix is a struct, and a rather massive one at that (64 bytes, I believe), and it's generally best to pass it to functions using ref so that you copy only 4 or 8 bytes instead of 64. A specialist C# feature? Certainly. Of not much use any more, or indicative of bad design? I don't agree.
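For instance, the XNA math API exposes ref/out overloads for exactly this reason (quoted from memory, so treat the exact signature as approximate):

Matrix world = Matrix.Identity, view = Matrix.Identity;
Matrix result;

// Multiplies the two 64-byte matrices without copying them at the call.
Matrix.Multiply(ref world, ref view, out result);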
One area is in the use of small utility functions, like:
void Swap<T>(ref T a, ref T b) { T tmp = a; a = b; b = tmp; }
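Typical usage; the caller's variables really are exchanged:

int x = 1, y = 2;
Swap(ref x, ref y);
// x == 2, y == 1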
I don't see any 'cleaner' alternatives here. Granted, this isn't exactly Architecture level.
P/Invoke is the only place I can really think of where you must use ref or out. In other cases they can be convenient, but, like you said, there is generally another, cleaner way.
What if you wanted to return multiple objects that, for some unknown reason, are not tied together into a single object?
void GetXYZ( ref object x, ref object y, ref object z);
EDIT: divo suggested that using out parameters would be more appropriate for this. I have to admit he's got a point. I'll leave this answer here, but for the record: this is an inadequate solution, and out trumps ref in this case.
I think the best uses are those that you usually see: when you need to get both a value and a "success indicator" that is not an exception out of a function.
One design pattern where ref is useful is a bidirectional visitor.
Suppose you had a Storage class that can be used to load or save values of various primitive types. It is either in Load mode or Save mode. It has a group of overloaded methods called Transfer, and here's an example for dealing with int values.
public void Transfer(ref int value)
{
    if (Loading)
        value = ReadInt();
    else
        WriteInt(value);
}
There would be similar methods for other primitive types - bool, string, etc.
Then on a class that needs to be "transferable", you would write a method like this:
public void TransferViaStorage(Storage s)
{
    s.Transfer(ref _firstName);
    s.Transfer(ref _lastName);
    s.Transfer(ref _salary);
}
This same single method can either load the fields from the Storage, or save the fields to the Storage, depending what mode the Storage object is in.
Really you're just listing all the fields that need to be transferred, so it closely approaches declarative programming instead of imperative. This means you don't need to write two functions (one for reading, one for writing), and given that the design I'm using here is order-dependent, it's very handy to know for sure that the fields will always be read/written in identical order.
The general point is that when a parameter is marked as ref, you don't know whether the method is going to read it or write to it, and this allows you to design visitor classes that work in one of two directions, intended to be called in a symmetrical way (i.e. with the visited method not needing to know which direction-mode the visitor class is operating in).
Comparison: Attributes + Reflection
Why do this instead of attributing the fields and using reflection to automatically implement the equivalent of TransferViaStorage? Because sometimes reflection is slow enough to be a bottleneck (but always profile to be sure of this - it's hardly ever true, and attributes are much closer to the ideal of declarative programming).
The real use for this is when you create a struct. Structs in C# are value types and are therefore copied completely when passed by value. If you need to pass one by reference, for example for performance reasons or because the function needs to make changes to the caller's variable, you use the ref keyword.
I could see that if someone has a struct with 100 values (obviously a problem already), you'd likely want to pass it by reference to avoid copying all 100 values. Returning that large struct and writing it over the old value would likely have performance issues too.
The obvious reason for using the "ref" keyword is when you want to pass a variable by reference, for example passing a value type like System.Int32 to a method and altering its actual value. A more specific use might be when you want to swap two variables.
public void Swap(ref int a, ref int b)
{
...
}
The main reason for using the "out" keyword is to return multiple values from a method. Personally, I prefer to wrap the values in a specialized struct or class, since using out parameters produces rather ugly code. Parameters passed with "out" are, just like "ref", passed by reference.
public void DoMagic(out int a, out int b, out int c, out int d)
{
...
}
There is one clear case when you must use the 'ref' keyword: when the object is declared, but not created, outside the method you intend to call, and that method is supposed to do the 'new' to create it. For example, given a simple class Thing with a Name field:

void Funct(Thing o) { o = new Thing(); o.Name = "dummy"; }
Thing a = null; Funct(a);

will NOT do a thing with object 'a', nor will it complain about it at either compile or run time. It just won't do anything. By contrast,

void Funct(ref Thing o) { o = new Thing(); o.Name = "dummy"; }
Thing a = null; Funct(ref a);

will result in 'a' being a new object with the Name "dummy". But if the 'new' was already done, then ref is not needed (though it still works if supplied):

void Funct(Thing o) { o.Name = "dummy"; }
Thing a = new Thing(); Funct(a);   // modifies the object 'a' refers to