Consider the following code:
public class DecimalWrapper
{
public static implicit operator DecimalWrapper(decimal x) => new();
}
[Fact]
public void Test()
{
// Why does this even compile? There is no implicit conversion from decimal? to decimal.
DecimalWrapper c = (decimal?)null; // variable 'c' is null!
}
I would not expect it to compile at all, since there is no implicit conversion from decimal? to decimal.
Is this a bug, or am I misunderstanding something?
I have seen this:
Serious bugs with lifted/nullable conversions from int, allowing conversion from decimal
but this looks different, and it is old (7+ years), so those bugs should be fixed by now. I cannot be sure, though, since all the links to the bug reports are gone... :(
I would really like to use code like this in a real solution (tracking of calculations), but this issue prevents me from doing so.
PS: I am compiling on Windows.
According to the specification, lifted conversion operators are allowed only for value-type-to-value-type conversions (and their nullable counterparts), but in the Roslyn compiler source code you can find the following comment:
DELIBERATE SPEC VIOLATION:
The native compiler allows for a "lifted" conversion even when the return type of the conversion is not a non-nullable value type. For example, if we have a conversion from struct S to string, then a "lifted" conversion from S? to string is considered by the native compiler to exist, with the semantics of "s.HasValue ? (string)s.Value : (string)null". The Roslyn compiler perpetuates this error for the sake of backwards compatibility.
So it seems to be an actual bug which was preserved for backwards compatibility. And the decompilation shows exactly that behaviour.
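To illustrate, here is roughly what the lifted conversion lowers to, following the semantics described in the Roslyn comment above (a sketch, not the literal compiler output):
decimal? x = null;
DecimalWrapper c = x.HasValue
    ? (DecimalWrapper)x.GetValueOrDefault() // user-defined operator runs
    : (DecimalWrapper)null;                 // null flows straight through, so 'c' is null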
Related
I have an explicit operator on the class MyVO, which should be non-nullable.
public class MyVO : ValueObject<MyVO>
{
public string Value { get; } // Should never be null
private MyVO(string v) => Value = v;
public static explicit operator MyVO(string vo)
{
if (string.IsNullOrWhiteSpace(vo)) throw new Exception("...");
return new MyVO(vo);
}
}
However, (MyVO)null will not raise an exception. The body of the method will not be run.
var myVO = (MyVO)null; // myVO will have the null value
How to make sure it's not null?
How to make sure it's not null?
By "it" I assume you mean "the result of the cast from null to MyVO". If that is not what you mean, please clarify the question.
You cannot.
An important rule of C# is a user-defined conversion never "wins" when it conflicts with a built-in conversion. It is legal to convert null to any class type, and so a cast of MyVO on the expression null will always result in a null reference. The compiler does not even consider the user-defined conversions if a built-in conversion works. (Believe me; I wrote that code!)
As D Stanley's answer correctly points out, if the null is the value of any expression of type string then the user-defined conversion is called; there is no built-in conversion from string to MyVO so the compiler looks for an applicable user-defined conversion and finds one.
Since it hurts when you do what you're doing, you should probably stop doing what you are doing. An explicit conversion is probably not the right way to implement the desired behaviour.
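For example, a static factory method (a sketch of one possible alternative, using hypothetical names, not part of the original code) makes the null check unavoidable, because no cast syntax is involved and so no built-in conversion can win:
public static MyVO Create(string value)
{
    if (string.IsNullOrWhiteSpace(value)) throw new ArgumentException("A non-empty value is required.", nameof(value));
    return new MyVO(value);
}
// MyVO.Create(null) always throws; the null check in the method body always runs.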
I guess my question should be how to make MyVO not nullable.
Upgrade to C# 8. C# 8 supports non-nullable annotations on reference types.
Note that the non-nullable annotation should be properly thought of as an annotation. The type system does not guarantee that the value of a variable annotated with a non-nullable annotation will never be observed to be null. Rather, it does its best to warn you when the code looks like it is wrong.
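A minimal sketch of what the annotation buys you, assuming nullable reference types are enabled for the project:
#nullable enable
MyVO? maybe = null;     // fine: explicitly declared nullable
MyVO definitely = null; // warning: converting null literal to non-nullable reference type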
While we are looking at your code, I notice that you are using ValueObject<T>, which I assume you have obtained from something like
https://enterprisecraftsmanship.com/posts/value-object-better-implementation/
Let me take this opportunity to caution you that there are pitfalls to using this pattern; the constraint that you think or want to be applied to T is not the constraint that is applied to T. We often see things like this:
abstract class V<T> where T : V<T>
{
public void M(T t) { ... } // M must take an instance of its own type
}
If we have class Banana : V<Banana> then Banana.M takes as its argument a Banana, which is what we want. But now suppose we have class Giraffe : V<Banana>. In this scenario, Giraffe.M does not take a giraffe; it takes a banana, even though Giraffe has no relationship with Banana at all.
The constraint does not mean that M always takes an instance of its own class. If you are trying to construct a generic type with this kind of constraint in C#, you cannot; the C# type system is not rich enough to express that constraint.
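A minimal sketch of the scenario described above:
abstract class V<T> where T : V<T>
{
    public void M(T t) { } // intended: "M takes an instance of its own type"
}
class Banana : V<Banana> { }  // Banana.M takes a Banana, as intended
class Giraffe : V<Banana> { } // compiles! Giraffe.M still takes a Banana, not a Giraffe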
null can be implicitly converted to any reference type, so the compiler is not using your explicit cast operator. Try
string s = null;
var myVO = (MyVO)s;
or just inline it:
var myVO = (MyVO)((string)null);
I've grabbed the source code of the Nullable<T> class from https://referencesource.microsoft.com/, put it into a file, and renamed the type to NullableZZ (and also put the source of NonVersionableAttribute into a separate file).
When I tried to build the following code:
static void Main(string[] args)
{
NullableZZ<int> n1 = 100;
NullableZZ<int> n2 = null;
}
I got this error:
Error CS0037 Cannot convert null to 'NullableZZ' because it is a non-nullable value type ConsoleApp2 C:\Users\Roman2\source\repos\ConsoleApp2\ConsoleApp2\Program.cs
Why does the C# compiler not want to compile it? Does it have some "tricks" to compile its "own" version of Nullable<T>?
Why does the C# compiler not want to compile it?
Because it doesn't have any specific knowledge of your class, but it does have specific knowledge of Nullable<T>.
Does it have some "tricks" to compile its "own" version of Nullable<T>?
Yes. The null literal is convertible to Nullable<T> for any non-nullable value type T, and also to any reference type. It is not convertible to NullableZZ<int>. Also, int? is effectively shorthand for Nullable<int> - it has special treatment.
Basically look through the specification (e.g. the ECMA C# 5 spec) and observe everywhere that it talks about Nullable<T>. You'll find lots of places that it's mentioned.
Nullable value types have support in the framework, the language and the CLR:
The Nullable<T> type has to exist in the framework
The language has support as described in this answer
The CLR has support in terms of validating generic constraints and also boxing (where the null value of a nullable value type boxes to a null reference)
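A small sketch contrasting that built-in support with an ordinary user-defined generic struct (NullableZZ standing in for the copied type from the question):
int? a = null;            // fine: language support for Nullable<int>
object boxed = a;         // boxes to a null reference: CLR support
NullableZZ<int> b = null; // error CS0037 (as in the question): no such support for a look-alike type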
Why do the first and second Write calls work but not the last? Is there a way I can allow all three of them and detect whether 1, (int)1, or i was passed in? And really, why are the first two allowed but not the last? The second being allowed but not the last really blows my mind.
Demo to show compile error
using System;
class Program
{
public static void Write(short v) { }
static void Main(string[] args)
{
Write(1);      // ok
Write((int)1); // ok
int i = 1;
Write(i);      // error!?
}
}
The first two are constant expressions, the last one isn't.
The C# specification allows an implicit conversion from int to short for constants, but not for other expressions. This is a reasonable rule, since for constants the compiler can ensure that the value fits into the target type, but it can't for normal expressions.
This rule is in line with the guideline that implicit conversions should be lossless.
6.1.8 Implicit constant expression conversions
An implicit constant expression conversion permits the following conversions:
A constant-expression (§7.18) of type int can be converted to type sbyte, byte, short, ushort, uint, or ulong, provided the value of the constant-expression is within the range of the destination type.
A constant-expression of type long can be converted to type ulong, provided the value of the constant-expression is not negative.
(Quoted from C# Language Specification Version 3.0)
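A short sketch of how that constant-range rule plays out with the Write(short) overload from the question:
Write(1);     // ok: the constant 1 fits in a short
Write(40000); // compile error: the constant 40000 does not fit in a short
int i = 1;
Write(i);     // compile error: 'i' is not a constant, so no implicit int-to-short conversion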
There is no implicit conversion from int to short because of the possibility of truncation. However, a constant expression can be treated as being of the target type by the compiler.
1? Not a problem: it’s clearly a valid short value. i? Not so much – it could be some value > short.MaxValue for instance, and the compiler cannot check that in the general case.
An int literal can be implicitly converted to short. Whereas:
You cannot implicitly convert non-literal numeric types of larger storage size to short.
So, the first two work because the implicit conversion of literals is allowed.
I believe it is because you are passing in a literal/constant in the first two, but there is no automatic type conversion when passing in an int variable in the third.
Edit: Someone beat me to it!
The compiler has told you why the code fails:
cannot convert `int' expression to type `short'
So here's the question you should be asking: why does this conversion fail? I googled "c# convert int short" and ended up on the MS C# page for the short keyword:
http://msdn.microsoft.com/en-us/library/ybs77ex4(v=vs.71).aspx
As this page says, implicit casts from a bigger data type to short are only allowed for literals. The compiler can tell when a literal is out of range, but not otherwise, so it needs reassurance that you've avoided an out-of-range error in your program logic. That reassurance is provided by a cast.
Write((short)i)
Because there is no implicit conversion from a non-literal expression of a larger type to short. Implicit conversion is only possible for constant expressions.
public static void Write(short v) { }
Whereas you are passing an int variable as an argument where a short is expected:
int i = 1;
Write(i); // non-literal, so no implicit conversion
Converting from int to short might result in data truncation. That's why.
Conversion from short to int happens implicitly, but int to short gives a compile error because it might result in data truncation.
Why is this a compile time error?
public TCastTo CastMe<TSource, TCastTo>(TSource i)
{
return (TCastTo)i;
}
Error:
Cannot convert type 'TSource' to 'TCastTo'
And why is this a runtime error?
public TCastTo CastMe<TSource, TCastTo>(TSource i)
{
return (TCastTo)(object)i;
}
int a = 4;
long b = CastMe<int, long>(a); // InvalidCastException
// this contrived example works
int aa = 4;
int bb = CastMe<int, int>(aa);
// this also works, the problem is limited to value types
string s = "foo";
object o = CastMe<string, object>(s);
I've searched SO and the internet for an answer to this and found lots of explanations on similar generic related casting issues, but I can't find anything on this particular simple case.
Why is this a compile time error?
The problem is that every possible combination of value types has different rules for what a cast means. Casting a 64 bit double to a 16 bit int is completely different code from casting a decimal to a float, and so on. The number of possibilities is enormous. So think like the compiler. What code is the compiler supposed to generate for your program?
The compiler would have to generate code that starts the compiler again at runtime, does a fresh analysis of the types, and dynamically emits the appropriate code.
That seems like perhaps more work and less performance than you expected to get with generics, so we simply outlaw it. If what you really want is for the compiler to start up again and do an analysis of the types, use "dynamic" in C# 4; that's what it does.
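A dynamic-based variant of the method from the question, as a sketch (it pushes the conversion analysis to runtime, so it is slower, but the int-to-long case now succeeds):
public TCastTo CastMe<TSource, TCastTo>(TSource i)
{
    return (TCastTo)(dynamic)i; // the runtime binder re-analyzes the conversion with the actual types
}
// ...
long b = CastMe<int, long>(4); // works: the binder finds the built-in int-to-long conversion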
And why is this a runtime error?
Same reason.
A boxed int may only be unboxed to int (or int?), for the same reason as above; if the CLR tried to do every possible conversion from a boxed value type to every other possible value type then essentially it has to run a compiler again at runtime. That would be unexpectedly slow.
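A short sketch of that unboxing rule (a hypothetical snippet, not from the question):
object o = 4;             // boxed int
int ok = (int)o;          // fine: unbox to the exact type
// long bad = (long)o;    // would throw InvalidCastException: cannot unbox an int as a long
long fine = (long)(int)o; // unbox to int first, then convert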
So why is it not an error for reference types?
Because every reference type conversion is the same as every other reference type conversion: you interrogate the object to see if it is derived from or identical to the desired type. If it's not, you throw an exception (if doing a cast) or result in null/false (if using the "as/is" operators). The rules are consistent for reference types in a way that they are not for value types. Remember reference types know their own type. Value types do not; with value types, the variable doing the storage is the only thing that knows the type semantics that apply to those bits. Value types contain their values and no additional information. Reference types contain their values plus lots of extra data.
For more information see my article on the subject:
http://ericlippert.com/2009/03/03/representation-and-identity/
C# uses one cast syntax for multiple different underlying operations:
upcast
downcast
boxing
unboxing
numeric conversion
user-defined conversion
In a generic context, the compiler has no way of knowing which of those is correct, and they all generate different MSIL, so it bails out.
By writing return (TCastTo)(object)i; instead, you force the compiler to do an upcast to object, followed by a downcast to TCastTo. The compiler will generate code, but if that wasn't the right way to convert the types in question, you'll get a runtime error.
Code Sample:
using System;
using System.Linq.Expressions;
public static class DefaultConverter<TInput, TOutput>
{
    private static readonly Converter<TInput, TOutput> cached;
    static DefaultConverter()
    {
        // Build and compile the conversion once per (TInput, TOutput) pair
        ParameterExpression p = Expression.Parameter(typeof(TInput));
        cached = Expression.Lambda<Converter<TInput, TOutput>>(
            Expression.Convert(p, typeof(TOutput)), p).Compile();
    }
    public static Converter<TInput, TOutput> Instance { get { return cached; } }
}
public static class DefaultConverter<TOutput>
{
    public static TOutput ConvertBen<TInput>(TInput from) { return DefaultConverter<TInput, TOutput>.Instance.Invoke(from); }
    public static TOutput ConvertEric(dynamic from) { return from; }
}
Eric's way sure is shorter, but I think mine should be faster.
The compile error is caused because TSource cannot be implicitly cast to TCastTo. The two types may share a branch on their inheritance tree, but there is no guarantee. If you wanted to call only types that did share an ancestor, you should modify the CastMe() signature to use the ancestor type instead of generics.
The runtime error example avoids the error in your first example by first casting the TSource i to object, something all objects in C# derive from. While the compiler doesn't complain (because a cast from object down to a derived type could be valid), casting via the (Type)variable syntax will throw at runtime if the cast is invalid (the same problem the compiler prevented from happening in example 1).
Another solution, which does something similar to what you're looking for...
public static T2 CastTo<T, T2>(T input, Func<T, T2> convert)
{
return convert(input);
}
You'd call it like this.
int a = 314;
long b = CastTo(a, i => (long)i);
Hopefully this helps.
It looks like the expression tree compiler should match the C# spec in most behaviors, but unlike C#, there is no support for a conversion from decimal to any enum-type:
using System;
using System.Linq.Expressions;
class Program
{
static void Main()
{
Func<decimal, ConsoleColor> converter1 = x => (ConsoleColor) x;
ConsoleColor c1 = converter1(7m); // fine
Expression<Func<decimal, ConsoleColor>> expr = x => (ConsoleColor) x;
// System.InvalidOperationException was unhandled
// No coercion operator is defined between types
// 'System.Decimal' and 'System.ConsoleColor'.
Func<decimal, ConsoleColor> converter2 = expr.Compile();
ConsoleColor c2 = converter2(7m);
}
}
Other rarely used C# explicit conversions, like double -> enum-type, exist and work as described in the C# specification, but not decimal -> enum-type. Is this a bug?
It is probably a bug, and it is probably my fault. Sorry about that.
Getting decimal conversions right was one of the hardest parts of getting the expression tree code correct in the compiler and the runtime, because decimal conversions are actually implemented as user-defined conversions in the runtime but treated as built-in conversions by the compiler. Decimal is the only type with this property, and therefore there are all kinds of special-purpose gear in the analyzer for these cases. In fact, there is a method called IsEnumToDecimalConversion in the analyzer to handle the special case of nullable enum to nullable decimal conversion; quite a complex special case.
Odds are good that I failed to consider some case going the other way, and generated bad code as a result. Thanks for the note; I'll send this off to the test team, and we'll see if we can get a repro going. Odds are good that if this does turn out to be a bona fide bug, this will not be fixed for C# 4 initial release; at this point we are taking only "user is electrocuted by the compiler" bugs so that the release is stable.
Not a real answer yet, I'm investigating, but the first line is compiled as:
Func<decimal, ConsoleColor> converter1 = x => (ConsoleColor)(int)x;
If you try to create an expression from the previous lambda, it will work.
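In other words, spelling out the intermediate conversion yourself sidesteps the problem (a sketch of that workaround, using new variable names):
Expression<Func<decimal, ConsoleColor>> expr2 = x => (ConsoleColor)(int)x;
Func<decimal, ConsoleColor> converter3 = expr2.Compile();
ConsoleColor c3 = converter3(7m); // works: decimal -> int -> ConsoleColor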
EDIT: In the C# spec, §6.2.2, you can read:
An explicit enumeration conversion between two types is processed by treating any participating enum-type as the underlying type of that enum-type, and then performing an implicit or explicit numeric conversion between the resulting types. For example, given an enum-type E with an underlying type of int, a conversion from E to byte is processed as an explicit numeric conversion (§6.2.1) from int to byte, and a conversion from byte to E is processed as an implicit numeric conversion (§6.1.2) from byte to int.
So explicit casts between decimal and an enum-type are handled via the enum's underlying type, which is why you get the nested casts (decimal to int, then int to the enum-type). But I can't see why the compiler doesn't parse the lambda body the same way in both cases.