OK, I hit this one by accident... Given this situation:
class Program {
    static void Main( string[ ] args ) {
        var obj = new gen<int>( );
        Console.Write( obj[ 1 ] );
        Console.ReadKey( );
    }
}

class gen<T> {
    public int this[ T i ] { get { return 2; } }
    public int this[ int i ] { get { return 1; } }
}
It will always print 1. I would have expected the compiler to complain, or the runtime to crash and burn and melt the CPU, but no, it is happy to print '1'.
Of course I can make it return either value if I use any other type for the generic parameter. For giggles, I tried using uint as the generic type parameter, and then I can differentiate between the calls, so the questions are:
Why does C# not freak out? Shouldn't Anders Hejlsberg feel a disturbance in the force?
How can I restrict a generic parameter from certain types? As in: this T can be anything but int (but long is OK).
I believe this is specified in section 7.5.3.2 of the C# 4 spec (Better Function Member).
Otherwise, if MP has more specific parameter types than MQ, then MP is better than MQ [...]
A type parameter is less specific than a nontype parameter
[...]
So here, the member with the T parameter is less specific than the member with the int parameter.
Try to design your way out of this one simply by not overloading like this. It's hard for indexers of course, but you could always provide methods instead (or possibly as well, as a fall-back).
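For instance, here is a minimal sketch of that fall-back; the method names are made up for illustration rather than taken from the question:
class gen<T> {
    // Distinct method names instead of overloaded indexers:
    // the caller states its intent explicitly, so overload resolution never has to pick.
    public int GetByKey( T key ) { return 2; }
    public int GetByPosition( int index ) { return 1; }
}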
EDIT: Note that if you have overloaded methods where both are type parameters, the compiler will complain:
public class Foo<T1, T2>
{
public void Bar(T1 t1) {}
public void Bar(T2 t2) {}
}
...
Foo<int, int> foo = new Foo<int, int>();
foo.Bar(10); // Error
Here neither method is more specific.
How can I restrict a generic parameter from certain types? As in: this T can be anything but int (but long is OK).
This is really an entirely separate question, but basically you can't. You can constrain type parameters in various ways, but not by explicitly including and excluding types. See the MSDN page on constraints for more information.
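For reference, here is a quick sketch of the constraint forms that do exist (standard where clauses; the Repository name is just an example), none of which can express "anything but int":
class Repository<T>
    where T : class, System.IComparable<T>, new()  // reference type, implements IComparable<T>, has a parameterless constructor
{
    // Other available forms: where T : struct (value types only),
    // where T : SomeBaseClass, where T : ISomeInterface.
    // There is no "where T is not int" form.
}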
As Eric Lippert says:
The C# specification says that when you have a choice between calling ReallyDoIt(string) and ReallyDoIt<string>(string) – that is,
when the choice is between two methods that have identical signatures,
but one gets that signature via generic substitution – then we pick
the “natural” signature over the “substituted” signature.
Also this process described in C# spec 7.5.3.2 (Better function member):
In case the parameter type sequences {P1, P2, …, PN} and {Q1, Q2, …, QN} are equivalent (i.e. each Pi has an identity conversion to the corresponding Qi), the following tie-breaking rules are applied, in order, to determine the better function member.
If MP is a non-generic method and MQ is a generic method, then MP is better than MQ (as John pointed out, this applies when you have a generic method, not a generic type)
...
Otherwise, if MP has more specific parameter types than MQ, then MP is better than MQ. Let {R1, R2, …, RN} and {S1, S2, …, SN} represent the uninstantiated and unexpanded parameter types of MP and MQ. MP’s parameter types are more specific than MQ’s if, for each parameter, RX is not less specific than SX, and, for at least one parameter, RX is more specific than SX:
A type parameter is less specific than a non-type parameter (this is your case: the members here are not generic methods, and the substituted parameter type is identical to the non-generic parameter type)
The C# compiler always chooses the more specific over the more generic method when a call fits both. That's why it doesn't freak out; it is just following its rules.
The C# compiler doesn't freak out because both members are valid and both can be called.
Here's an example that returns "2":
gen<Form> gen = new gen<Form>();
textBox1.Text = gen[this].ToString();
Here "this" is a Form. Of course, indexing with an object instead of a number is a little odd... Well, whatever, it works.
But like everybody else said, the compiler will prefer the explicit over the implicit.
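To make the "differentiate between the calls" remark from the question concrete, here is a small sketch reusing the question's gen<T> class with long as the type argument:
var obj = new gen<long>( );
Console.Write( obj[ 1 ] );  // prints 1: the literal 1 is an int, so this[int] is an exact match
Console.Write( obj[ 1L ] ); // prints 2: a long argument only fits this[T] with T = long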
Related
I think I understand why this small C# console application will not compile:
using System;

namespace ConsoleApp1
{
    class Program
    {
        static void WriteFullName(Type t)
        {
            Console.WriteLine(t.FullName);
        }

        static void Main(string[] args)
        {
            WriteFullName(System.Text.Encoding);
        }
    }
}
The compiler raises a CS0119 error: 'Encoding' is a type, which is not valid in the given context. I know that I can produce a Type object using the typeof() operator:
...
static void Main(string[] args)
{
WriteFullName(typeof(System.Text.Encoding));
}
...
And everything works as expected.
But to me, that use of typeof() has always seemed somewhat redundant. If the compiler knows that some token is a reference to a given type (as error CS0119 suggests) and it knows that the destination of some assignment (be it a function parameter, a variable or whatever) expects a reference to a given type, why can't the compiler take it as an implicit typeof() call?
Or maybe the compiler is perfectly capable of taking that step, but it has been chosen not to because of the problems that might generate. Would that result in any ambiguity/legibility issues that I cannot think of right now?
If the compiler knows that some token is a reference to a given type (as error CS0119 suggests) and it knows that the destination of some assignment (be it a function parameter, a variable or whatever) expects a reference to a given type, why can't the compiler take it as an implicit typeof() call?
First off, your proposal is that the compiler reason both "inside to outside" and "outside to inside" at the same time. That is, in order to make your proposed feature work the compiler must both deduce that the expression System.Text.Encoding refers to a type and that the context -- a call to WriteFullName -- requires a type. How do we know that the context requires a type? The resolution of WriteFullName requires overload resolution because there could be a hundred of them, and maybe only one of them takes a Type as an argument in that position.
So now we must design overload resolution to recognize this specific case. Overload resolution is hard enough already. Now consider the implications on type inference as well.
C# is designed so that in the vast majority of cases you do not need to do bidirectional inference because bidirectional inference is expensive and difficult. The place where we do use bidirectional inference is lambdas, and it took me the better part of a year to implement and test it. Getting context-sensitive inference on lambdas was a key feature that was necessary to make LINQ work and so it was worth the extremely high burden of getting bidirectional inference right.
Moreover: why is Type special? It's perfectly legal to say object x = typeof(T); so shouldn't object x = int; be legal in your proposal? Suppose a type C has a user-defined implicit conversion from Type to C; shouldn't C c = string; be legal?
But let's leave that aside for a moment and consider the other merits of your proposal. For example, what do you propose to do about this?
class C {
    public static string FullName = "Hello";
}
...
Type c = C;
Console.WriteLine(c.FullName); // "C"
Console.WriteLine(C.FullName); // "Hello"
Does it not strike you as bizarre that c == C but c.FullName != C.FullName? A basic principle of programming language design is that you can stuff an expression into a variable and the value of the variable behaves like the expression, but that is not at all true here.
Your proposal is basically that every expression that refers to a type has a different behaviour depending on whether it is used or assigned, and that is super confusing.
Now, you might say, well, let's make a special syntax to disambiguate situations where the type is used from situations where the type is mentioned, and there is such a syntax. It is typeof(T)! If we want to treat T.FullName as T being Type we say typeof(T).FullName and if we want to treat T as being a qualifier in a lookup we say T.FullName, and now we have cleanly disambiguated these cases without having to do any bidirectional inference.
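As a concrete (if trivial) illustration of that disambiguation, using string in both roles:
string asValue     = typeof(string).FullName; // the type treated as a System.Type value: "System.String"
string asQualifier = string.Empty;            // the type treated as a qualifier in a member lookup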
Basically, the fundamental problem is that types are not first class in C#. There are things you can do with types that you can only do at compile time. There's no:
Type t = b ? C : D;
List<t> l = new List<t>();
where l is either List<C> or List<D> depending on the value of b. Since types are very special expressions, and specifically are expressions that have no value at runtime, they need to have some special syntax that calls out when they are being used as a value.
Finally, there is also an argument to be made about likely correctness. If a developer writes Foo(Bar.Blah) and Bar.Blah is a type, odds are pretty good they've made a mistake and thought that Bar.Blah was an expression that resolves to a value. Odds are not good that they intended to pass a Type to Foo.
Follow up question:
why is it possible with method groups when passed to a delegate argument? Is it because usage and mentioning of a method are easier to distinguish?
Method groups do not have members; you never say:
class C { public void M() {} }
...
var x = C.M.Whatever;
because C.M doesn't have any members at all. So that problem disappears. We never say "well, C.M is convertible to Action and Action has a method Invoke, so let's allow C.M.Invoke()." That just doesn't happen. Again, method groups are not first class values. Only after they are converted to delegates do they become first class values.
Basically, method groups are treated as expressions that have a value but no type, and then the convertibility rules determine what method groups are convertible to what delegate types.
Now, if you were going to make the argument that a method group ought to be convertible implicitly to MethodInfo and used in any context where a MethodInfo was expected, then we'd have to consider the merits of that. There has been a proposal for decades to make an infoof operator (pronounced "in-foof" of course!) that would return a MethodInfo when given a method group and a PropertyInfo when given a property and so on, and that proposal has always failed as too much design work for too little benefit. nameof was the cheap-to-implement version that got done.
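For completeness, here is a sketch of what is available today in place of the hypothetical infoof: nameof for the compile-time checked name, and ordinary reflection for the MethodInfo.
using System.Reflection;

class C { public void M() { } }

class Demo
{
    static void Main()
    {
        string name   = nameof(C.M);                      // "M", checked at compile time
        MethodInfo mi = typeof(C).GetMethod(nameof(C.M)); // the runtime lookup infoof would have avoided
        System.Console.WriteLine(mi);
    }
}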
A question you did not ask but which seems germane:
You said that C.FullName could be ambiguous because it would be unclear if C is a Type or the type C. Are there other similar ambiguities in C#?
Yes! Consider:
enum Color { Red }

class C {
    public Color Color { get; private set; }
    public void M(Color c) { }
    public void N(String s) { }
    public void O() {
        M(Color.Red);
        N(Color.ToString());
    }
}
In this scenario, cleverly called the "Color Color Problem", the C# compiler manages to figure out that Color in the call to M means the type, and that in the call to N, it means this.Color. Do a search in the specification on "Color Color" and you'll find the rule, or see blog post Color Color.
I was wondering if C# supported implicit type discovery for class generics.
For example, such functionality exists for method generics.
I can have the following method:
public void Foo<T>(T obj);
And call it like this:
int n = 0;
instance.Foo(n);
As you can see, I'm not specifying the <int> generic type argument. It's being implicitly discovered because I passed an int value.
I want to accomplish something similar at the class definition level:
internal interface IPersistenceStrategy<E, T> : IDisposable
where E : UniqueEntity<T>
I want it to be defined as IPersistenceStrategy<MyEntity>, where MyEntity is a UniqueEntity<int>.
As you can see, the T type param, is being implicitly discovered from MyEntity.
However, this does not work. I have to supply the T param explicitly:
IPersistenceStrategy<MyEntity, int> myStrategy;
Why does this not work? Is the C# compiler not smart enough to discover my type param automatically?
Is there some way to accomplish what I am looking for?
There is no type inference for generic type declarations or initializations. You can only omit the generic arguments when calling a generic method; it does not work when constructing a generic type. For example:
var list = new List { 2, 3, 4 };
Here you might expect the compiler to see that you want to create a list of int, so there would be no need to specify the type argument, but that is not the case.
In your specific example, let's assume the compiler has inferred this:
IPersistenceStrategy<MyEntity> myStrategy;
as IPersistenceStrategy<MyEntity, int>, then what should happen if there is another declaration in the same assembly, such as:
interface IPersistenceStrategy<T> { }
Of course this would cause an ambiguity, so that might be one of the reasons why it is not allowed.
C# has type inference for methods, but not for constructors. This feature was proposed for C# 6, but it seems it was removed from the release, according to Mads Torgersen (http://blogs.msdn.com/b/csharpfaq/archive/2014/11/20/new-features-in-c-6.aspx).
Also have a look at Language features in C# 6 and VB 14; there is no mention of it there.
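The usual workaround is the factory-method pattern, which leans on method type inference; Tuple.Create is the canonical BCL example, and the helper below is my own illustration rather than anything from the question:
using System;
using System.Collections.Generic;

static class MakeList
{
    // A factory method lets the compiler infer T from the arguments.
    public static List<T> Of<T>(params T[] items) { return new List<T>(items); }
}

// Usage:
//   var numbers = MakeList.Of(2, 3, 4);       // T inferred as int
//   var pair    = Tuple.Create(42, "answer"); // same trick in the BCL: Tuple<int, string>
// whereas "new List { 2, 3, 4 }" or "IPersistenceStrategy<MyEntity>" does not compile,
// because type arguments on a type name are never inferred.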
I have two overloaded generic methods:
T Foo<T>(T t) { Console.WriteLine("T"); return t; }
T Foo<T>(int i) { Console.WriteLine("int"); return default(T); }
When I try to call Foo as follows on my computer:
Foo(5);
I get no compiler errors or warnings, and the first method with the generic argument is called (i.e. the output is T). Will this be the case in all C# incarnations and on all platforms? In that case, why?
On the other hand, if I explicitly specify the type in the generic call:
Foo<int>(5);
the second method with the int argument is called, i.e. the output is now int. Why?
I am using different argument names in my two method overloads, so the output from the following calls is as expected:
Foo<int>(t: 5); // output 'T'
Foo<int>(i: 5); // output 'int'
If I am calling the first method, I can even leave out the type specification:
Foo(t: 5); // output 'T'
But if I try to compile this:
Foo(i: 5);
I get an error: The type arguments for method 'Foo(int)' cannot be inferred from the usage. Try specifying the type arguments explicitly. Why can't the compiler deal with this call?
Note These tests have been performed with LinqPad on a Windows 8 x64 system (in case that is relevant to the results...)
Last question
Since you specified (by parameter name) that it should call the overload that takes an int parameter, the compiler has no idea what to pass for T.
First question
Because of this, Foo(5) matches only one overload: Foo<T>(T t).
Therefore, it can only call Foo<T>(T t).
Second question
When you explicitly specify a type argument (<int>), both overloads are applicable.
In that case, Foo(int) is better, since its parameter is not of generic type.
As per the C# spec §7.5.3.2:
Otherwise, if MP has more specific parameter types than MQ, then MP is better than MQ. Let {R1, R2, …, RN} and {S1, S2, …, SN} represent the uninstantiated and unexpanded parameter types of MP and MQ. MP’s parameter types are more specific than MQ’s if, for each parameter, RX is not less specific than SX, and, for at least one parameter, RX is more specific than SX:
A type parameter is less specific than a non-type parameter.
(emphasis added)
public enum EnumTest
{
    EnumEntry
}

public class TestClass
{
    public string FunctionMember(string s, EnumTest t = EnumTest.EnumEntry)
    {
        return "Normal";
    }

    public string FunctionMember<T>(T t)
    {
        return "Generic";
    }
}

class Program
{
    static void Main(string[] args)
    {
        TestClass t = new TestClass();
        Console.WriteLine(t.FunctionMember("a"));
    }
}
This prints "Generic". Removing , EnumTest t = EnumTest.EnumEntry makes it print "Normal".
And yet the standard appears to be pretty clear, from 14.4.2.2 Better function member the first discriminator to be applied is:
If one of MP and MQ is non-generic, but the other is generic, then the non-generic is better.
Am I missing something or compiler bug?
You are missing something. And that is the following:
You call the method with one parameter. There is only one method that has one parameter, the generic one. So that's the one that's chosen.
Only if it didn't find a matching method would it look at other methods with optional parameters.
References:
C# 4.0 Specification, last paragraph in 21.4:
As a tie-breaker rule, a function member for which all arguments were explicitly given is better than one for which default values were supplied in lieu of explicit arguments.
MSDN, heading "Overload resolution", last bullet point:
If two candidates are judged to be equally good, preference goes to a candidate that does not have optional parameters for which arguments were omitted in the call. This is a consequence of a general preference in overload resolution for candidates that have fewer parameters.
The C# Language Specification, Chapter "7.5.3.2 Better function member":
Parameter lists for each of the candidate function members are constructed in the following way:
The expanded form is used if the function member was applicable only in the expanded form.
Optional parameters with no corresponding arguments are removed from the parameter list
It continues like this:
Given an argument list A with a set of argument expressions { E1, E2, ..., EN } and two applicable function members MP and MQ with parameter types { P1, P2, ..., PN } and { Q1, Q2, ..., QN } [...]
At this point the method with the optional parameter is already out of the game. N is 1, but that method has two parameters.
The docs say:
If two candidates are judged to be equally good, preference goes to a candidate that does not have optional parameters for which arguments were omitted in the call. This is a consequence of a general preference in overload resolution for candidates that have fewer parameters.
In other words, the method without any optional arguments will be preferred.
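A minimal, generics-free sketch of that preference (the method names are made up):
static string M(int x)            { return "exact"; }
static string M(int x, int y = 0) { return "optional"; }

// M(5): both overloads are applicable, but the candidate with no omitted
// optional parameter is preferred, so this prints "exact".
Console.WriteLine(M(5));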
With default values for method parameters, overload resolution was extended.
Conceptually, the pre-v4 method overload resolution is run first. If that finds a match, that match is used. (Conceptually, because this is not a description of how it actually works but of how you can think of it.)
In your case it finds exactly one match: your generic method.
If it does not find a match, it looks for methods that are a partial match and where the match can be completed with default values. In your case your non-generic method would be found in this pass, but resolution never gets that far because a match has already been found.
When you remove the second parameter you end up in a situation where there is both a generic and a non-generic match, and the rule you quote kicks in, picking the non-generic one.
All in all, a good rule of thumb is that the most specific available method will be chosen:
A non-generic method that matches is more specific than a generic one, because the type can't vary.
A method with default parameters is less specific than one where the argument count matches the parameter count exactly.
If two methods are available but one takes an argument of IFoo and the other takes a Foo (implementing IFoo), then the latter will be chosen when passing a Foo object as the argument, because it's an exact match, i.e. more specific (see the sketch below).
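Here is that sketch of the last rule of thumb; IFoo, Foo and Describe are made-up names:
interface IFoo { }
class Foo : IFoo { }

class Demo
{
    static string Describe(IFoo f) { return "interface overload"; }
    static string Describe(Foo f)  { return "exact overload"; }

    static void Main()
    {
        // The Foo overload needs only an identity conversion, so it wins:
        System.Console.WriteLine(Describe(new Foo()));    // "exact overload"
        // To reach the IFoo overload, widen the static type yourself:
        IFoo asInterface = new Foo();
        System.Console.WriteLine(Describe(asInterface));  // "interface overload"
    }
}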
I wonder why it is not possible to declare a method parameter as var, like this:
private void myMethod(var myValue) {
// do something
}
You can only use var for variables inside the method body. Also the variable must be assigned at declaration and it must be possible to deduce the type unambiguously from the expression on the right-hand side.
In all other places you must specify a type, even if a type could in theory be deduced.
The reason is due to the way that the compiler is designed. A simplified description is that it first parses everything except method bodies and then makes a full analysis of the static types of every class, member, etc. It then uses this information when parsing the method bodies, and in particular for deducing the type of local variables declared as var. If var were allowed anywhere then it would require a large change to the way the compiler works.
You can read Eric Lippert's article on this subject for more details:
Why no var on fields?
Because the compiler determines the actual type by looking at the right hand side of the assignment. For example, here it is determined to be a string:
var s = "hello";
Here it is determined to be Foo:
var foo = new Foo();
In method arguments, there is no "right hand side of the assignment", so you can't use var.
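A quick sketch of where var is and isn't accepted (the error descriptions are paraphrased, not exact compiler wording):
var s = "hello";        // OK: string is deduced from the initializer
// var t;               // error: an implicitly typed variable must be initialized
// var u = null;        // error: null gives the compiler nothing to deduce a type from
// void M(var x) { }    // error: a parameter has no initializer to deduce from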
See the posting by Eric Lippert about why var is not allowed on fields, which also contains the explanation why it doesn't work in method signatures:
Let me give you a quick oversimplification of how the C# compiler works. First we run through every source file and do a "top level only" parse. That is, we identify every namespace, class, struct, enum, interface, and delegate type declaration at all levels of nesting. We parse all field declarations, method declarations, and so on. In fact, we parse everything except method bodies; those, we skip and come back to them later.
[...]
if we have "var" fields then the type of the field cannot be determined until the expression is analyzed, and that happens after we already need to know the type of the field.
Please see Juliet's answer for a better answer to this question.
Because it was too hard to add full type inference to C#.
Other languages such as Haskell and ML can automatically infer the most general type without you having to declare it.
The other answers state that it's "impossible" for the compiler to infer the type of var but actually it is possible in principle. For example:
abstract void anotherMethod(double z, double w);
void myMethod<T>(T arg)
{
anotherMethod(arg, 2.0); // Now a compiler could in principle infer that arg must be of type double (but the actual C# compiler can't)
}
Have "var" method parameters is in principle the same thing as generic methods:
void myMethod<T>(T arg)
{
....
}
It is unfortunate that you can't just use the same syntax for both, but this is probably due to the fact that C#'s type inference was added only later.
In general, subtle changes in the language syntax and semantics can turn a "deterministic" type inference algorithm into an undecidable one.
ML, Haskell, Scala, F#, SML, and other languages can easily figure out the type from equivalent expressions in their own language, mainly because they were designed with type inference in mind from the very start. C# wasn't; its type inference was tacked on as a post-hoc solution to the problem of accessing anonymous types.
I speculate that true Hindley-Milner type inference was never implemented for C# because it's complicated to deduce types in a language so dependent on classes and inheritance. Let's say I have the following classes:
class Base { public void Print() { ... } }
class Derived1 : Base { }
class Derived2 : Base { }
And now I have this method:
var create() { return new Derived1(); }
What's the return type here? Is it Derived1, or should it be Base? For that matter, should it be object?
OK, now let's say I have this method:
void doStuff(var someBase) { someBase.Print(); }
void Main()
{
    doStuff(new Derived1());
    doStuff(new Derived2()); // <-- type error or not?
}
The first call, doStuff(new Derived1()), presumably forces doStuff to the type doStuff(Derived1 someBase). Let's assume for now that we infer a concrete type instead of a generic type T.
What about the second call, doStuff(new Derived2())? Is it a type error, or do we generalize to doStuff<T>(T someBase) where T : Base instead? What if we made the same call in a separate, unreferenced assembly? The type inference algorithm would have no idea whether to use the narrow type or the more generalized type. So we'd end up with two different type signatures depending on whether method calls originate from inside the assembly or from a foreign assembly.
You can't generalize to wider types based on usage of the function. You basically need to settle on a single concrete type as soon as you know which concrete type is being passed in. So in the example code above, unless you explicitly cast up to the Base type, doStuff is constrained to accept Derived1, and the second call is a type error.
Now the trick here is settling on a type. What happens here:
class Whatever
{
    void Foo() { DoStuff(new Derived1()); }
    void Bar() { DoStuff(new Derived2()); }
    void DoStuff(var x) { ... }
}
What's the type of DoStuff? For that matter, we know based on the above that one of the Foo or Bar methods contains a type error, but can you tell from looking which one has the error?
It's not possible to resolve the type without changing the semantics of C#. In C#, the order of method declarations has no impact on compilation (or at least it shouldn't ;) ). You might say instead that the method declared first (in this case, the Foo method) determines the type, so Bar has an error.
This works, but it also changes the semantics of C#: changes in method order will change the compiled type of the method.
But let's say we went further:
// Whatever.cs
class Whatever
{
    public void DoStuff(var x);
}

// Foo.cs
class Foo
{
    public Foo() { new Whatever().DoStuff(new Derived1()); }
}

// Bar.cs
class Bar
{
    public Bar() { new Whatever().DoStuff(new Derived2()); }
}
Now the method is being invoked from different files. What's the type? It's not possible to decide without imposing some rules on compilation order: if Foo.cs gets compiled before Bar.cs, the type is determined by Foo.cs.
While we can impose those sorts of rules on C# to make type inference work, it would drastically change the semantics of the language.
By contrast, ML, Haskell, F#, and SML support type inference so well because they have these sorts of restrictions: you can't call methods before they're declared, the first method call to inferred functions determines the type, compilation order has an impact on type inference, etc.
The "var" keyword is used in C# and VB.NET for type inference - you basically tell the C# compiler: "you figure out what the type is".
"var" is still strongly typed - you're just too lazy yourself to write out the type and let the compiler figure it out - based on the data type of the right-hand side of the assignment.
Here, in a method parameter, the compiler has no way of figuring out what you really meant. How? What type did you really mean? There's no way for the compiler to infer the type from the method definition - therefore it's not a valid statement.
Because C# is a type-safe, strongly typed language. At any place in your program the compiler always knows the type of the argument you are using. The var keyword was introduced just to allow variables of anonymous types.
Check out dynamic in C# 4.
Type inference is type inference, either in local expressions or global / interprocedural. So it isn't about "not having a right hand side", because in compiler theory, a procedure call is a form of "right hand side".
C# could do this if the compiler did global type inference, but it does not.
You can use "object" if you want a parameter that accepts anything, but then you need to deal with the runtime conversion and potential exceptions yourself.
"var" in C# isn't a runtime type binding, it is a compile time feature that ends up with a very specific type, but C# type inference is limited in scope.