After reading this question asking what exactly a "Special Class" is, I am left wondering why the five classes System.Object, System.Array, System.Delegate, System.Enum, and System.ValueType were chosen and hard-coded as special classes, preventing them from being used as constraints on generic classes or methods.
It is easy to understand why System.Object is in there: all classes inherit from System.Object, so there is no need to include it as a constraint. What I am unclear about is why the others were chosen to be part of this special-classes category.
PS: The Special Classes raise the compile error CS0702 when an attempt is made to use them as constraints.
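For reference, a minimal reproduction of the error, alongside the keyword constraint the compiler steers you toward:

// error CS0702: Constraint cannot be special class 'System.ValueType'
class Wrapper<T> where T : System.ValueType
{
}

// The language provides a dedicated keyword for this constraint instead:
class StructWrapper<T> where T : struct
{
}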
Those classes were already different before generic constraints, or even generics, were added to the .NET framework and support for them added to the C# language.
What each of them has in common is that inheriting from them works differently than with other types:
System.Object: You can't not inherit from this in C#.
System.Array: You inherit from this by creating an array of an existing type (Array x = new int[2]; etc.)
System.Delegate: You inherit from this by creating a delegate (which then derives from MulticastDelegate, also a "special type", which derives from Delegate).
System.Enum: You inherit from this by creating an enum.
System.ValueType: You inherit from this by creating a struct.
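A quick illustration of these implicit inheritance relationships; each assignment below compiles precisely because of them:

using System;

enum Color { Red, Green }
struct Point { public int X, Y; }

class Demo
{
    static void Main()
    {
        Array a = new int[2];              // int[] derives from System.Array
        Delegate d = (Action)(() => { });  // Action derives from MulticastDelegate, which derives from Delegate
        Enum e = Color.Red;                // Color derives from System.Enum (boxed here)
        ValueType v = new Point();         // Point derives from System.ValueType (boxed here)
        Console.WriteLine($"{a}, {d}, {e}, {v}");
    }
}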
Now, note that aside from new(), generic constraints are all about inheritance or implementation of interfaces (which is akin to inheritance in many ways). Indeed, the other restrictions are that you can't use a pointer type and you can't use a sealed type; two cases where you can't have a derived type anyway (though the ban on sealed types is primarily because you are likely creating a generic type or method when you don't need to, and is an attempt to protect you from yourself).
And as such code that is based on inheritance features (as constraints are) when faced with special cases about inheritance will likely have to itself involve special cases. Those special cases were dealt with in the simplest way: By prohibiting them.
The value of such constraints is also lower in many of these cases:
System.Object: Since the only types that can't be converted to System.Object are pointer types, and these can't be used as generic parameters anyway, any such constraint is redundant.
System.Array: You can define in terms of element types: void DoSomethingWithArray<T>(T[] array) etc.
System.Delegate: Such a constraint would be useful; in many cases we can define in terms of parameter and/or return types, but there are cases this doesn't catch.
System.Enum: Would be useful.
System.ValueType: Already dealt with; constrain as struct. Conversely, we can also constrain as class to exclude this case, so we've actually got a "not inherited from…" option we don't have otherwise.
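A small sketch of those two keyword-based alternatives:

static class Constraints
{
    // "where T : struct" expresses what "where T : System.ValueType" would:
    public static T FirstValue<T>(T[] items) where T : struct => items[0];

    // "where T : class" excludes value types entirely -- effectively the
    // "not inherited from System.ValueType" option mentioned above:
    public static T FirstRef<T>(T[] items) where T : class => items[0];
}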
This is not to deny that being able to constrain in terms of Delegate, MulticastDelegate, or Enum would be useful (Enum probably most of all), but since the others would give little or no benefit, the payoff for the extra work needed to cover these types is reduced.
Related
For a generic class foo<T>, I can specify extension methods that are only available for certain type arguments of the class; for example,
public static void Bar(this foo<int> val) is a valid declaration, as is
public static void Bar(this foo<float> val), and in this way I can specify different behaviors for a generic type I intend to use the same way, handling the differences in their individual methods. The most useful case of this is adding arithmetic functions to Vectors, Matrices, and Sets while still keeping them generic. (Since, for some god-forsaken reason, there is still no INumeric constraint to admit only types which support the basic operators.)
Is there equivalent functionality for static variables? The objective being that I can accomplish the following:
foo<int>.Zero and foo<float>.Zero, and have each one be different and not conflict with the other, returning a foo of the appropriate type, but without compile errors for foo<bar>.Zero, because there is no "Zero" concept for an object like bar.
C# does not provide syntax that allows you to define extension fields or properties, nor static extensions for classes.
To achieve this, you should use a static property like foo.ZeroOfFloat; some standard C# APIs are defined that way.
It looks very pretty and clean when you make things more generic, but this is not the correct use case for generics.
"Generic" means general, for type-irrelevant use; it is therefore different from template definitions in C++, which support specialization. Technically, generic definitions in C# are compiled and delivered as IL, rather than as source code as in C++, which means compilation happens before a concrete type is plugged into the generic definition, so there cannot be a specialization for a single type.
There are generic constraints for reference types, value types, delegates, native structs, etc., that limit the use of a generic definition to certain categories of types, but these disallow some use cases rather than enabling anything new.
Although you could exploit the fact that different extension methods can target a generic type closed over different type arguments (which gives a feeling of template specialization), that is not by design.
Do not bake type-specific logic into a generic definition. Definitions like ZeroOfFloat are not bad practice.
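A minimal sketch of that suggestion, reusing the asker's hypothetical foo and Zero names:

public class foo<T>
{
    public T Value;
}

// A non-generic companion class exposes the per-type "zero" values in the
// ZeroOfFloat style suggested above; foo<bar> simply has no Zero, so the
// unsupported case never compiles.
public static class foo
{
    public static readonly foo<int> ZeroOfInt = new foo<int> { Value = 0 };
    public static readonly foo<float> ZeroOfFloat = new foo<float> { Value = 0f };
}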
Recently I found out that C# allows for
An interface can inherit from one or more base interfaces.
For instance, the IScreen in Caliburn.Micro does this in http://caliburnmicro.codeplex.com/SourceControl/latest#src/Caliburn.Micro/IScreen.cs
namespace Caliburn.Micro
{
public interface IScreen : IHaveDisplayName, IActivate, IDeactivate,
IGuardClose, INotifyPropertyChangedEx
{
}
}
I get why this is useful, as it implies that a class implementing IScreen also needs to implement the other interfaces.
But I wonder how C# handles that, compiler- and run-time-wise.
A little background/context of this question:
I come from a background where interfaces define a method table, and classes implementing interfaces have both their own method table and pointers to the method tables of the interfaces they implement.
Sub-questions swirling in my mind stem from various multiple-class-inheritance discussions I have had with people in the past, which I think apply to this case as well:
Having an interface be able to inherit from multiple base interfaces, how would the order of methods in that table be?
What if those interfaces have common ancestors: would those methods appear multiple times in the table?
What if those interfaces have different ancestors, but similar method names?
(I'm using the word methods here, implying that a property defined in an interface will have a get_ or set_ method).
Any insight into this is much appreciated, as well as tips on how to phrase this question better.
First of all, let's be explicit in saying that "interface inheritance" is not quite the same thing as class-based inheritance (and using the word "inheritance" for both is perhaps misleading).
That's because interfaces cannot be instantiated on their own, so the compiler/runtime pair does not have to keep track of how to make a virtual call for standalone interface types (e.g. you don't need to know how to call IEnumerable.GetEnumerator -- you just need to know how to call it on a specific type of object). That allows for handling things differently at compile time.
Now I don't actually know how the compiler implements "interface inheritance", but here's how it could be doing it:
Having an interface be able to inherit from multiple base interfaces, how would the order of methods in that table be?
It's not necessary for the "derived" interface to have a method table that includes the methods from all of its ancestor interfaces because it does not actually implement any of them. It's enough for each interface type to only have a table of methods it defines itself.
What if those interfaces have common ancestors: would those methods appear multiple times in the table?
Given the answer to the previous question, no. In the end a concrete type will only implement IFoo just once, regardless of how many times IFoo appears in the "hierarchy" of implemented interfaces. Methods defined in IFoo will only appear in IFoo's bookkeeping tables.
What if those interfaces have different ancestors, but similar method names?
Again, no problem. You need appropriate syntax to tell the compiler "here's how to implement IFoo.Frob and here's IBar.Frob", but since methods of IFoo and IBar will be mapped in separate tables there are no technical issues.
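The C# syntax in question is explicit interface implementation; a small sketch with two hypothetical interfaces:

interface IFoo { void Frob(); }
interface IBar { void Frob(); }

class Widget : IFoo, IBar
{
    // Explicit implementations: each method is reachable only through its
    // own interface, so the two Frobs never collide.
    void IFoo.Frob() { /* IFoo-specific behavior */ }
    void IBar.Frob() { /* IBar-specific behavior */ }
}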
Of course this leaves the question "how are methods dispatched at runtime?" unanswered. But it's not that difficult to imagine a possible solution: each concrete type C has pointers to one method table per interface it implements. When it's time to make a virtual method call the runtime looks at the concrete type, finds the table for the interface whose method is going to be called (the type of the interface is known statically) and makes the call.
I am unable to speak to how the official CLR does it, but the Rotor distribution aggressively overlays common interface ancestors on top of one another in an object's vtable. It also allocates extra slots in the concrete object's vtable where appropriate, thereby reducing the need to jump from the concrete type to the interface vtable and then to the implementation; the method offset is calculated at JIT time. If this optimization cannot be performed, then a single method can occupy the vtable more than once.
So the answer is (in regards to Rotor anyway), that it really is an implementation detail and any overlaying/optimizations etc are left completely up to what the compiler decides is best at the time it compiles the type.
I've been looking into empty interfaces and abstract classes, and from what I have read, they are generally bad practice. I intend to use them as the foundation for a small search application that I am writing. I would write the initial search provider, and others would be allowed to create their own providers as well. My code's intent is to enforce relationships between the classes for anyone who would like to implement them.
Can someone chime in and describe if and why this is still a bad practice, and what alternatives, if any, are available?
namespace Api.SearchProviders
{
public abstract class ListingSearchResult
{
public abstract string GetResultsAsJSON();
}
public abstract class SearchParameters
{
}
public interface IListingSearchProvider
{
ListingSearchResult SearchListings(SearchParameters p);
}
}
Empty classes and interfaces are generally only "usably useful" as generic constraints; the types are not usable by themselves, and generic constraints are generally the only context in which one may use them in conjunction with something else useful. For example, if IMagicThing encapsulates some values, some implementations are mutable, and some aren't, a method which wants to record the values associated with an IMagicThing might be written something like:
void RecordValues<T>(T it) where T : IMagicThing, IIsImmutable {...}
where IIsImmutable is an empty interface whose contract says that any class which implements it and reports some value for any property must forevermore report the same value for that property. A method written as indicated could know that its parameter was contractually obligated to behave as an immutable implementation of IMagicThing.
Conceptually, if various implementations of an interface will make different promises regarding their behaviors, being able to combine those promises with constraints would seem helpful. Unfortunately, there's a rather nasty limitation with this approach: it won't be possible to pass an object to the above method unless one knows a particular type which satisfies all of the constraints and from which the object derives. If there were only one constraint, one could cast the object to that type, but that won't work if there are two or more.
Because of the above difficulty when using constrained generics, it's better to express the concept of "an IMagicThing which promises to be immutable" by defining an interface IImmutableMagicThing which derives from IMagicThing but adds no new members. A method which expects an IImmutableMagicThing won't accept any IMagicThing that doesn't implement the immutable interface, even if it happens to be immutable, but if one has a reference to an IMagicThing that happens to implement IImmutableMagicThing, one can cast that reference to the latter type and pass it to a routine that requires it.
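A sketch of that alternative, reusing the hypothetical IMagicThing example from above:

interface IMagicThing
{
    int Value { get; }
}

// Empty derived interface: adds no members, only the immutability promise.
interface IImmutableMagicThing : IMagicThing { }

static class Recorder
{
    // No generic constraints needed; a plain parameter of the derived
    // interface type suffices.
    public static void RecordValues(IImmutableMagicThing thing)
    {
        // Safe to cache thing.Value: it is contractually immutable.
    }

    public static void TryRecord(IMagicThing thing)
    {
        // A single cast works here, where the two-constraint generic could not:
        if (thing is IImmutableMagicThing immutable)
            RecordValues(immutable);
    }
}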
Incidentally, there's one other usage I can see for an empty class type: as an identity token. A class need not have any members to serve as a dictionary key, a monitor lock, or the target of a weak reference. Especially if one has extension methods associated with such usage, defining an empty class for such purpose may be much more convenient than using Object.
1) Are the reasons why IEqualityComparer<T> was introduced:
a) so we would be able to compare objects (of particular type) for equality in as many different ways as needed
b) and, by having a standard interface for implementing a custom equality comparison, the chances are much greater that third-party classes will accept this interface as a parameter, thereby allowing us to inject equality-comparison behavior into those classes via objects implementing IEqualityComparer<T>?
2) I assume IEqualityComparer<T> should not be implemented on type T that we're trying to compare for equality, but instead we should implement it on helper class(es)?
Thank you
I'm doubtful that anyone here will be able to answer with any authority the reason that the interface was introduced (my guess--and that's all it is--would be to support one of the generic set types like Dictionary<TKey, TValue> or HashSet<T>), but its purpose is clear:
Defines methods to support the comparison of objects for equality.
If you combine this with the fact you can have multiple types implementing this interface (see StringComparer), then the answer to question a is yes.
The reason for this is threefold:
Operators (in this case, ==) are not polymorphic; if the type is upcast to a higher level than where the type-specific comparison logic is defined, you'll end up performing a reference comparison rather than using the logic within the == operator.
Equals() requires at least one valid reference and can provide different logic depending on whether it's called on the first or second value (one could be more derived and override the logic of the other).
Lastly and most importantly, the comparison logic provided by the type may not be what the user is after. For example, strings (in C#) are case sensitive when compared using == or Equals. This means that any container (like Dictionary<string, T> or HashSet<string>) would be case-sensitive. Allowing the user to provide another type that implements IEqualityComparer<string> means that the user can use whatever logic they like to determine if one string equals the other, including ignoring case.
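For instance, the standard StringComparer class implements IEqualityComparer<string>, and handing it to a dictionary swaps out the key comparison:

using System;
using System.Collections.Generic;

class Demo
{
    static void Main()
    {
        // Default comparer: case-sensitive keys.
        var strict = new Dictionary<string, int> { ["Hello"] = 1 };
        Console.WriteLine(strict.ContainsKey("HELLO"));   // False

        // Injected comparer: case-insensitive keys.
        var loose = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase)
        {
            ["Hello"] = 1
        };
        Console.WriteLine(loose.ContainsKey("HELLO"));    // True
    }
}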
As for question b, probably, though I wouldn't be surprised if this wasn't high on the list of priorities.
For your final question, I'd say that's generally true. While there's nothing stopping you from doing so, it is confusing for type T to provide custom comparison logic that differs from its own equality semantics just because it's referenced as an IEqualityComparer<T>.
Agreed on a and b.
"Should not be" is always a normative question and rarely a good metric. You do what works without getting into trouble (The Pragmatic Programmer). The fact that you can implement the interface stateful, stateless, and in any which way makes it possible to implement (alternative) comparers for all types, including value types, enums, sealed types, even abstract types; in essence it is a Strategy pattern.
Sometimes there's a natural equality comparison for a type, in which case it should implement IEquatable<T>, not IEqualityComparer<T>. At other times, there are multiple possible ways of comparing objects for equality - so it makes sense to implement IEqualityComparer<T> then. It allows hash tables (and sets etc) to work in a flexible way.
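A compact sketch of that split; Point and XOnlyComparer are made-up names:

using System;
using System.Collections.Generic;

// Natural equality: the type itself implements IEquatable<T>.
struct Point : IEquatable<Point>
{
    public int X, Y;
    public bool Equals(Point other) => X == other.X && Y == other.Y;
    public override int GetHashCode() => HashCode.Combine(X, Y);
}

// One of several possible alternative equalities: a separate comparer,
// usable by hash tables and sets.
class XOnlyComparer : IEqualityComparer<Point>
{
    public bool Equals(Point a, Point b) => a.X == b.X;
    public int GetHashCode(Point p) => p.X.GetHashCode();
}

// var byX = new HashSet<Point>(new XOnlyComparer());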
In the code below, the "Move" public class derives from the generic type "Submit". "Submit" is a method, part of the DSS model, which handles messages and accepts two parameters: one is the message body and one is the message response.
My question is: How or WHY does a class derive from a method?!
It seems to me (since I'm only a beginner) that "generic types" mean just this... any method or class (and by extension, any "block" of code) can become a type. Moreover, there are NO types... everything is just a "class" which you can derive from (yet you probably can't overload string).
This basically means that there are in fact NO methods OR types, but rather just classes (and some "sub"-classes (ex-methods)), and you can derive from everything?!
Thank you.
I'm not looking for the expert "except this" answer where some small thing is not possible. I would like confirmation that this is, in fact, what programmers are doing 90% of the time.
public class Move : Submit<MoveRequest, PortSet<DefaultSubmitResponseType, Fault>>
{
public Move()
{
}
public Move(MoveRequest body) : base(body)
{
}
}
You cannot derive from a method. Submit<T, V> must be a class.
In a little more detail: Submit may represent an "action" and is therefore commonly thought of as a method, but in your context Submit is indeed a class. This may be an example of the "Command" design pattern, in which a request for an action is encapsulated in an object and can thus be passed around and acted on by classes that handle the command.
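A minimal sketch of that pattern; the names below are illustrative, not taken from DSS:

using System;

// The "action" is reified as an object that can be stored, queued,
// logged, or dispatched later.
public interface ICommand
{
    void Execute();
}

public class SubmitCommand : ICommand
{
    private readonly string _body;
    public SubmitCommand(string body) => _body = body;
    public void Execute() => Console.WriteLine($"Submitting: {_body}");
}

// A handler can accept any command without knowing its concrete type:
// void Handle(ICommand command) => command.Execute();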
Generics, conceptually speaking, are classes that are able to provide similar functionality across a set of "inner" types. The basic example is a Math class that can add, subtract, multiply, and divide two variables of numeric type; you know, very advanced math you can't do any other way. There are a lot of numeric types in most type systems (in C# you have byte, short, int, long, float, double, and decimal, plus unsigned variations). Rather than implement a MathByte, MathInt, MathLong, etc., each with methods strongly tied to the type they work on, or implement a Math class that works with any Object (and thus requires you to examine the type of everything passed in to determine that you can work with it), you can simply create a Math<T> class, where T can be any of the numeric types.
The type parameter T is different from method parameters; when you declare an instance of the class, you specify a type that the instance will be set up to handle. That instance can then only work with objects of the specified type, but you can instantiate a Math<byte> and a Math<decimal> to work with different types. Methods defined in Math specify input parameters of type T, and T is "replaced" at instantiation with the type declared when you instantiate the class.
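A tiny illustration of that mechanism; Pair<T> is a made-up type (and note that C# cannot apply arithmetic operators to an unconstrained T, which is why the Math<T> above remains conceptual):

using System;

public class Pair<T>
{
    public T First, Second;
    public Pair(T first, T second) { First = first; Second = second; }
}

public static class Demo
{
    public static void Main()
    {
        var ints = new Pair<int>(1, 2);       // this instance handles only int
        var floats = new Pair<float>(1f, 2f); // this one handles only float
        Console.WriteLine($"{ints.First}, {floats.Second}");
    }
}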
Generics help support the DRY ("Don't Repeat Yourself") tenet of good coding practice, while maintaining type integrity. MathLong, MathInt, MathByte etc would all be similar or identical in their internal code; the main difference would be the type of the object they work on. Instead of rewriting the same class 10 times, you write one class that can be more concretely defined as to its working type by consumers of your class.
Hope this is a little more educational.
No, Submit is definitely not a method. A class can only derive from another class or implement an interface, so Submit has to be either a class or an interface.