Why is zero chosen to be Int32 inside Ternary Operator? [duplicate] - c#

short s;
s = (EitherTrueOrFalse()) ? 0 : 1;
This fails with:
error CS0266: Cannot implicitly convert type 'int' to 'short'. An explicit conversion exists (are you missing a cast?)
Can anyone explain why this is so? The only thing I can think of is that the compiler doesn't look at the actual values and so doesn't know their range, in case I had written something like
short s;
s = (EitherTrueOrFalse()) ? 0 : 65000;
Correct?
Is the only fix an ugly cast?
Also, it seems C# does not have a type suffix for the short type. That's a pretty grave oversight IMO. Otherwise, that would've been a solution...

The compiler has an implicit conversion from a constant expression to various primitive types (so long as the value is within the appropriate range), but here the expression isn't constant - it's just an int expression. It's pretty much the same as:
short s;
s = CallSomeMethodReturningInt32();
as far as the compiler is concerned.
There are two options - you could cast the whole expression, or cast each of the latter two operands:
short s = (EitherTrueOrFalse()) ? (short) 0 : (short) 1;
to make the overall expression type short. In this particular case, it's a pity that there isn't a numeric literal suffix to explicitly declare a short literal. Apparently the language designers did consider this, but felt it was a relatively rare situation. (I think I'd probably agree.)
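For completeness, the first option mentioned above (casting the whole conditional expression rather than each operand) looks like this:
short s = (short) (EitherTrueOrFalse() ? 0 : 1);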
The part about implicit constant conversions is from the C# 3.0 spec section 6.1.8:
6.1.8 Implicit constant expression conversions
An implicit constant expression conversion permits the following conversions:
A constant-expression (§7.18) of type int can be converted to type sbyte, byte, short, ushort, uint, or ulong, provided the value of the constant-expression is within the range of the destination type.
A constant-expression of type long can be converted to type ulong, provided the value of the constant-expression is not negative.

Because the cast is done by the compiler at compile time, not at runtime, I wouldn't call it an ugly cast; I would call it complicated syntax:
s = (EitherTrueOrFalse()) ? (short)0 : (short)1;
I mean, this is the way it is written in C#, even if it looks ugly.
See this blog article.
See Marc Gravell's answer on that question.

I guess this fails for the same reason that this won't compile:
short s1 = GetShort1();
short s2 = GetShort2();
short s3 = s1 + s2;
I.e. whenever a short is used as an operand of an arithmetic operator, it gets promoted to int first (there is no + operator defined on short).
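The usual fix is the same kind of cast; a minimal sketch, assuming the same hypothetical GetShort methods:
short s1 = GetShort1();
short s2 = GetShort2();
short s3 = (short)(s1 + s2); // the addition is evaluated as int; the cast narrows the result back to short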

Related


Why can I pass 1 as a short, but not the int variable i?

Why do the first and second Write calls work, but not the last? Is there a way I can allow all 3 of them and detect whether it was 1, (int)1 or i that was passed in? And really, why is one allowed but not the last? The second being allowed but not the last really blows my mind.
Demo to show compile error
using System;

class Program
{
    public static void Write(short v) { }

    static void Main(string[] args)
    {
        Write(1);      // ok
        Write((int)1); // ok
        int i = 1;
        Write(i);      // error!?
    }
}
The first two are constant expressions, the last one isn't.
The C# specification allows an implicit conversion from int to short for constants, but not for other expressions. This is a reasonable rule, since for constants the compiler can ensure that the value fits into the target type, but it can't for normal expressions.
This rule is in line with the guideline that implicit conversions should be lossless.
6.1.8 Implicit constant expression conversions
An implicit constant expression conversion permits the following conversions:
A constant-expression (§7.18) of type int can be converted to type sbyte, byte, short, ushort, uint, or ulong, provided the value of the constant-expression is within the range of the destination type.
A constant-expression of type long can be converted to type ulong, provided the value of the constant-expression is not negative.
(Quoted from C# Language Specification Version 3.0)
There is no implicit conversion from int to short because of the possibility of truncation. However, a constant expression can be treated as being of the target type by the compiler.
1? Not a problem: it’s clearly a valid short value. i? Not so much – it could be some value > short.MaxValue for instance, and the compiler cannot check that in the general case.
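A small sketch of that range check on constants, using the Write method from the question (the literal values are illustrative):
Write(32767); // ok: 32767 == short.MaxValue, so the constant conversion applies
Write(32768); // compile-time error: the constant is out of the range of short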
An int literal can be implicitly converted to short, whereas you cannot implicitly convert non-literal numeric types of larger storage size to short. So the first two calls work because the implicit conversion of literals is allowed.
I believe it is because you are passing in a literal/constant in the first two, but there is no automatic type conversion when passing in an int variable in the third.
Edit: Someone beat me to it!
The compiler has told you why the code fails:
cannot convert `int' expression to type `short'
So here's the question you should be asking: why does this conversion fail? I googled "c# convert int short" and ended up on the MS C# page for the short keyword:
http://msdn.microsoft.com/en-us/library/ybs77ex4(v=vs.71).aspx
As this page says, implicit casts from a bigger data type to short are only allowed for literals. The compiler can tell when a literal is out of range, but not otherwise, so it needs reassurance that you've avoided an out-of-range error in your program logic. That reassurance is provided by a cast.
Write((short)i);
Because there is no implicit conversion from a non-literal expression of a larger type to short; implicit conversion is only possible for constant expressions.
public static void Write(short v) { }
Here you are passing an int variable as an argument to the short parameter:
int i = 1;
Write(i); // i is non-literal here
Converting from int to short might result in data truncation; that's why. Conversion from short to int happens implicitly, but int to short causes a compile error because it might result in data truncation.
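To see why the language insists on an explicit cast here, a minimal sketch of what silent truncation looks like (the values are illustrative):
int i = 70000;
short a = (short)i;          // compiles, but silently truncates: a == 4464 (70000 mod 65536)
short b = checked((short)i); // throws an OverflowException at runtime instead of truncating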

Shouldn't if(1 == null) cause an error? [duplicate]

The Int32 struct doesn't define an operator overload method for the == operator, so why doesn't the following code cause a compile-time error:
if(1 == null) ... ;
Let's take a step back here. The question is confusing and the answers so far are not very clear as to what is going on here.
Shouldn't if(1 == null) cause an error?
No. That is legal, though dumb.
How does the compiler deal with operators like "=="? It does so by applying the overload resolution algorithm.
The first thing we must determine is whether this is a "user defined" equality operator or a "built in" equality operator. The left side is a built-in type. The right side has no type at all. Neither of those are user-defined types. Therefore no user-defined operator will be considered. Only built-in operators will be considered.
Once we know that, the question is "which built-in operators will be considered?" The built-in operators are described in section 7.10 of the spec. They are equality operators on int, uint, long, ulong, decimal, float, double, any enum type, bool, char, object, string and any delegate type.
All of the equality operators on value types also have a "lifted" form that takes nullable value types.
We must now determine which of those operators are applicable. To be applicable, there must be an implicit conversion from both sides to the operator's type.
There is no implicit conversion from int to any enum type, bool, string or any delegate type, so those all vanish from consideration.
(There is not an implicit conversion from int to uint, ulong, etc, but since this is a literal one, there is an implicit conversion from 1 to uint, ulong, etc.)
There is no implicit conversion from null to any non-nullable value type, so those all disappear too.
What does that leave? That leaves the operator on object and the lifted operators on the remaining nullable types: int?, uint?, long?, ulong?, double?, float?, decimal? and char?.
We must now determine which one of those remaining applicable candidates is the unique "best" operator. An operator is better than another operator if its operand type is more specific. "object" is the least specific type, so it is eliminated. Clearly every nullable int can be converted to nullable long, but not every nullable long can be converted to nullable int, so nullable long is less specific than nullable int. So it is eliminated. We continue to eliminate operators in this manner. (In the case of the unsigned types we apply a special rule that says that if int? and uint? are both options then int? wins.)
I will spare you the details; ultimately that process leaves nullable int as the unique best operand type.
Therefore your program is interpreted as if((int?)1 == (int?)null), which clearly is legal, and will always be false.
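A minimal sketch of that interpretation (the compiler typically also warns that the result is always false):
int? one = 1;
int? nothing = null;
Console.WriteLine(1 == null);      // false: the lifted == on int?, per the analysis above
Console.WriteLine(one == nothing); // false: the same comparison spelled out explicitly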
Int32 struct doesn't define operator overload method for == operator
You are correct. What does that have to do with anything? The compiler is perfectly able to do the analysis without it. I don't understand the relationship you believe this fact has to your question. The fact is about a method that could be defined on a type, the question is about how overload resolution chooses a lifted built-in operator. Those two things are not related because "int" is not a user-defined type.
It is a value type rather than a reference type, so it does not need the operators.
http://msdn.microsoft.com/en-us/library/s1ax56ch.aspx
The operators for primitive types (every numeric type except decimal) are defined by the language, not the runtime.
They compile to IL instructions rather than method calls (ceq for ==).
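For instance (a small sketch; the IL named in the comment is what the comparison compiles down to):
int a = 1, b = 2;
bool eq = a == b; // emits the ceq IL opcode directly; no call to an op_Equality method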
Have a look at this MSDN blog article; it contains an answer to your question.

Why can't I switch on a class with a single implicit conversion to an enum

I am wondering why it is that a single implicit conversion to an enum value doesn't work the same way it would if the conversion were to a system type. I can't see any technical reason, but maybe someone smarter than I am can shed some light on this for me.
The following fails to compile with "A value of an integral type expected" and "Cannot implicitly convert type 'Test.En' to 'Test.Foo'".
void test1()
{
    Foo f = new Foo();
    switch (f) // Comment this line to compile
    //switch ((En)f) // Uncomment this line to compile
    {
        case En.One:
            break;
    }
}

//////////////////////////////////////////////////////////////////

public enum En
{
    One,
    Two,
    Three,
}

public class Foo
{
    En _myEn;

    public static implicit operator En(Foo f)
    {
        return f._myEn;
    }
}
Edit, from the spec:
The governing type of a switch statement is established by the switch expression. If the type of the switch expression is sbyte, byte, short, ushort, int, uint, long, ulong, char, string, or an enum-type, then that is the governing type of the switch statement. Otherwise, exactly one user-defined implicit conversion (§6.4) must exist from the type of the switch expression to one of the following possible governing types: sbyte, byte, short, ushort, int, uint, long, ulong, char, string. If no such implicit conversion exists, or if more than one such implicit conversion exists, a compile-time error occurs.
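To illustrate that rule, a hypothetical sketch (Baz and _value are illustrative names) of a class that can be switched on because it has exactly one such conversion:
public class Baz
{
    int _value;
    public static implicit operator int(Baz b) { return b._value; }
}

void Use(Baz baz)
{
    switch (baz) // compiles: the governing type is int, via the single implicit conversion
    {
        case 0:
            break;
    }
}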
To clarify the question: why is an enum-type not included in the list of allowed target types for the user-defined implicit conversion?
The language design notes archive does not provide a justification for this decision. This is unfortunate, since the decision was changed. As you can see, the design evolved over time:
Notes from May 26th, 1999:
What types are allowed as the argument to a switch statement? Integral types including char, enum types, bool. C# also permits types that can be implicitly and unambiguously converted to one of the aforementioned types. (If there are multiple implicit conversions, then it's ambiguous and a compile-time error occurs.) We're not sure whether we want to support string or not.
June 7th, 1999:
We discussed enabling switch on string arguments. We think this is a good feature – the language can add value by making this common case easier to write, and the additional complexity for the user is very low.
December 20th, 1999:
It is illegal to switch on an expression of type bool. It is legal to switch on an expression of an integral type or string type. It is legal to switch on an expression of a type that has exactly one implicit conversion to an integral type or string type.
Here we have the first occurrence of the rule in question. Enums seem to have disappeared. And why not use user-defined implicit conversions to enum? Was this simply an oversight? The designers did not record their thoughts.
Note that the first sentence is NOT what we implemented. It is unclear to me why the implementors did the opposite of what the design committee recommended. This comes up again in the notes several years later:
August 13, 2003:
The compiler allows switch on bool. Don't want to document this and add it to the language. Don't want to remove it for compatibility reasons. Decided to silently continue to support switch on bool.
I decided that this was silly; when we produced the annotated print edition of the C# 3.0 specification, I added bool (and bool?) to the list of legal governing types.
In short: the whole thing is a bit of a mess. I have no idea why enums were in, then out, then half-in-half-out. This might have to remain one of the Mysteries of the Unknown.
Because enums are treated as integers for the purpose of switching, and, as I've asked about before, the compiler doesn't do multiple implicit conversions to get to a usable type, so it can't figure out how to switch on foo.
My only theory as to why enums can't be used like that is that enums are not an integral type in and of themselves, and thus the compiler would have to do multiple implicit conversions to get from foo to an integer primitive.
I compiled and then reflected your code, and here are the results:
public static void Main()
{
    Foo f = new Foo();
    f._myEn = En.Three;
    switch (f)
    {
        case En.One:
        {
        }
    }
}
So apparently under the covers it does do an implicit conversion. :S
void test1()
{
    Foo f = new Foo();
    En n = f;
    switch (n)
    {
        case En.One:
            break;
    }
}
EDIT: Since switch expects an integral value, writing switch(f) makes the compiler look for a conversion from an instance of Foo to an integral type, which doesn't exist.
What if your class contained two enums and had implicit conversion operators for both? Or better yet, what if you had implicit conversion operators for an enum and int? Which conversion would the compiler "automatically" pick for you when you write a switch statement?
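A hypothetical sketch of that ambiguity (Bar is an illustrative name; int and long stand in for any two governing types):
public class Bar
{
    public static implicit operator int(Bar b) { return 0; }
    public static implicit operator long(Bar b) { return 0L; }
}
// switch (new Bar()) { ... } fails to compile: more than one implicit
// conversion to a possible governing type exists, so per the spec quoted
// above, a compile-time error occurs.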
You have to explicitly specify what type of object is being used inside the switch statement. Implicit operators just tell the compiler/runtime "if you have a Foo and need an En, this code does that". It does not change the actual underlying type of the object.
Take a look at that second error message. The compiler is trying to type-coerce the enum to match the type of what is in the switch statement.
As a point of interest, how does this fare?
void test2()
{
    Foo f = new Foo();
    switch (En.One)
    {
        case f:
            break;
    }
}
}

Is this is an ExpressionTrees bug? #2

It looks like the expression tree compiler should match the C# spec in most behaviors, but unlike C#, it has no support for the conversion from decimal to any enum-type:
using System;
using System.Linq.Expressions;

class Program
{
    static void Main()
    {
        Func<decimal, ConsoleColor> converter1 = x => (ConsoleColor) x;
        ConsoleColor c1 = converter1(7m); // fine

        Expression<Func<decimal, ConsoleColor>> expr = x => (ConsoleColor) x;

        // System.InvalidOperationException was unhandled
        // No coercion operator is defined between types
        // 'System.Decimal' and 'System.ConsoleColor'.
        Func<decimal, ConsoleColor> converter2 = expr.Compile();
        ConsoleColor c2 = converter2(7m);
    }
}
Other rarely used C# explicit conversions, like double -> enum-type, exist and work as explained in the C# specification, but not decimal -> enum-type. Is this a bug?
It is probably a bug, and it is probably my fault. Sorry about that.
Getting decimal conversions right was one of the hardest parts of getting the expression tree code correct in the compiler and the runtime, because decimal conversions are actually implemented as user-defined conversions in the runtime, but treated as built-in conversions by the compiler. Decimal is the only type with this property, and therefore there are all kinds of special-purpose gear in the analyzer for these cases. In fact, there is a method called IsEnumToDecimalConversion in the analyzer to handle the special case of nullable enum to nullable decimal conversion; quite a complex special case.
Odds are good that I failed to consider some case going the other way, and generated bad code as a result. Thanks for the note; I'll send this off to the test team, and we'll see if we can get a repro going. Odds are good that if this does turn out to be a bona fide bug, this will not be fixed for C# 4 initial release; at this point we are taking only "user is electrocuted by the compiler" bugs so that the release is stable.
Not a real answer yet (I'm still investigating), but the first line is compiled as:
Func<decimal, ConsoleColor> converter1 = x => (ConsoleColor)(int)x;
If you try to create an expression from the previous lambda, it will work.
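That is, this workaround should compile and run (a sketch based on the observation above; the names are illustrative):
Expression<Func<decimal, ConsoleColor>> expr2 = x => (ConsoleColor)(int)x;
Func<decimal, ConsoleColor> converter3 = expr2.Compile();
ConsoleColor c3 = converter3(7m); // ConsoleColor.Gray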
EDIT: In the C# spec, §6.2.2, you can read:
An explicit enumeration conversion between two types is processed by treating any participating enum-type as the underlying type of that enum-type, and then performing an implicit or explicit numeric conversion between the resulting types. For example, given an enum-type E with an underlying type of int, a conversion from E to byte is processed as an explicit numeric conversion (§6.2.1) from int to byte, and a conversion from byte to E is processed as an implicit numeric conversion (§6.1.2) from byte to int.
So explicit casts between enum types and decimal are handled specially; that's why you get the nested casts (decimal to int, then int to the enum). But I can't see why the compiler doesn't parse the lambda body the same way in both cases.
