int promotion to unsigned int in C and C#

Have a look at this C code:
#include <stdio.h>

int main()
{
    unsigned int y = 10;
    int x = -2;
    if (x > y)
        printf("x is greater");
    else
        printf("y is greater");
    return 0;
}
/* Output: x is greater. */
I understand why the output is "x is greater": when the two are compared, x is converted to the unsigned integer type. On a platform where int is 16 bits wide, -2 converts to 65534 (with a 32-bit int it would be 4294967294), which is definitely greater than 10.
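For illustration, the same reinterpretation can be expressed in C# with unchecked casts (a minimal sketch; which value you get depends on the width of the unsigned type, as noted above):

Console.WriteLine(unchecked((ushort)-2)); // 65534 (mirrors a 16-bit unsigned int)
Console.WriteLine(unchecked((uint)-2));   // 4294967294 (mirrors a 32-bit unsigned int)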
But why in C#, does the equivalent code give the opposite result?
public static void Main(String[] args)
{
    uint y = 10;
    int x = -2;
    if (x > y)
    {
        Console.WriteLine("x is greater");
    }
    else
    {
        Console.WriteLine("y is greater");
    }
}
//Output: y is greater.

In C#, both uint and int get promoted to a long before the comparison.
This is documented in 4.1.5 Integral types of the C# language spec:
For the binary +, –, *, /, %, &, ^, |, ==, !=, >, <, >=, and <= operators, the operands are converted to type T, where T is the first of int, uint, long, and ulong that can fully represent all possible values of both operands. The operation is then performed using the precision of type T, and the type of the result is T (or bool for the relational operators). It is not permitted for one operand to be of type long and the other to be of type ulong with the binary operators.
Since long is the first type that can fully represent all int and uint values, the variables are both converted to long, then compared.
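In other words, the comparison behaves as if both operands were widened to long first; making the conversion explicit gives the same answer (a quick sketch using the values from the question):

uint y = 10;
int x = -2;
Console.WriteLine(x > y);             // False
Console.WriteLine((long)x > (long)y); // False - the explicit form of what the compiler emits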

In C#, in a comparison between an int and uint, both values are promoted to long values.
"Otherwise, if either operand is of type uint and the other operand is of type sbyte, short, or int, both operands are converted to type long."
http://msdn.microsoft.com/en-us/library/aa691330(v=vs.71).aspx

C and C# have differing views of what integral types represent. See my answer https://stackoverflow.com/a/18796084/363751 for some discussion about C's view. In C#, whether integers represent numbers or members of an abstract algebraic ring is determined to some extent by whether "checked arithmetic" is turned on or off, but that simply controls whether out-of-bounds computation should throw exceptions. In general, the .NET Framework regards all integer types as representing numbers, and aside from allowing some out-of-bounds computations to be performed without throwing exceptions, C# follows its lead.
If unsigned types represent members of an algebraic ring, adding e.g. -5 to an unsigned 2 should yield an unsigned value which, when added to 5, will yield 2. If they represent numbers, then adding a -5 to an unsigned 2 should if possible yield a representation of the number -3. Since promoting the operands to Int64 will allow that to happen, that's what C# does.
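Both views can be demonstrated in C# (a small sketch; the ring behavior has to be requested explicitly with casts, since C# defaults to the number view):

uint two = 2;
int minusFive = -5;
// Number view (what C# picks): uint + int promotes both operands to long.
long asNumber = two + minusFive;                // -3
// Ring view: keep everything uint and let the arithmetic wrap.
uint asRing = unchecked(two + (uint)minusFive); // 4294967293
Console.WriteLine(unchecked(asRing + 5u));      // 2 - adding 5 wraps back to 2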
Incidentally, I dislike the notion that operators (especially relational operators!) should always work by promoting their operands to a common compatible type, should return a result of that type, and should accept without squawking any combination of operands which can be promoted to a common type. Given float f; long l;, there are at least three sensible meanings for a comparison f == l [it could cast l to float, it could cast l and f to double, or it could ensure that f is a whole number which can be cast to long, and that when cast it equals l]. Alternatively, a compiler could simply reject such a mixed comparison. If I had my druthers, compilers would be enjoined from casting the operands to relational operators except in cases where there was only one plausible meaning. Requiring that things which are implicitly convertible everywhere must be directly comparable is IMHO unhelpful.

Related

sbyte and if operator

Why is the first statement allowed but not the second in the sample code below?
sbyte test1 = true ? 1 : -1; // Allowed
sbyte test2 = "a".Equals("b") ? 1 : -1; // Not allowed
I checked that all .Equals(..) overloads for string return a bool.
The first statement includes a constant expression that can be evaluated at a compile-time. It's equivalent to this:
sbyte test1 = 1;
Technically, this is an assignment of an int literal (1) to an sbyte variable. But the compiler is smart enough to figure out that the value is small enough to fit into the sbyte range, so it allows an implicit conversion, i.e. you don't need to cast it to sbyte.
The second statement includes a method call, and method calls are only evaluated at runtime. In other words, the compiler isn't smart enough to simplify the expression. The only thing it knows is that the expression returns an int value not known at compile time, and such values must be converted explicitly. For example, like this:
sbyte test2 = (sbyte) ("a".Equals("b") ? 1 : -1);
All of this is explained in the C# specification. See, Implicit constant expression conversions:
A constant_expression (Constant expressions) of type int can be converted to type sbyte, byte, short, ushort, uint, or ulong, provided the value of the constant_expression is within the range of the destination type.
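A few standalone lines illustrating the rule (a sketch; the commented-out lines would not compile):

sbyte a = 127;      // OK: the constant 127 fits in the sbyte range
// sbyte b = 128;   // error: constant 128 is out of the sbyte range
int n = 1;
// sbyte c = n;     // error: n is not a constant, so no implicit conversion
sbyte d = (sbyte)n; // OK with an explicit cast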

Operator ">" cannot be applied to type 'ulong' and 'int'

I'm curious to know why the C# compiler only gives me an error message for the second if statement.
enum Permissions : ulong
{
    ViewListItems = 1L,
}

public void Method()
{
    int mask = 138612833;
    int compare = 32;
    if (mask > 0 & (ulong)Permissions.ViewListItems > 32)
    {
        // Works
    }
    if (mask > 0 & (ulong)Permissions.ViewListItems > compare)
    {
        // Operator '>' cannot be applied to operands of type 'ulong' and 'int'
    }
}
I've been experimenting with this, using ILSpy to examine the output, and this is what I've discovered.
Obviously in your second case this is an error - you can't compare a ulong and an int because there isn't a type you can coerce both to. A ulong might be too big for a long, and an int might be negative.
In your first case, however, the compiler is being clever. It realises that const 1 > const 32 is never true, and doesn't include your if statement in the compiled output at all. (It should give a warning for unreachable code.) It's the same if you define and use a const int rather than a literal, or even if you cast the literal explicitly (i.e. (int)32).
But then isn't the compiler successfully comparing a ulong with an int, which we just said was impossible?
Apparently not. So what is going on?
Try instead to do something along the following lines. (Taking input and writing output so the compiler doesn't compile anything away.)
const int thirtytwo = 32;

static void Main(string[] args)
{
    ulong x = ulong.Parse(Console.ReadLine());
    bool gt = x > thirtytwo;
    Console.WriteLine(gt);
}
This will compile, even though the ulong is a variable, and even though the result isn't known at compile time. Take a look at the output in ILSpy:
private static void Main(string[] args)
{
    ulong x = ulong.Parse(Console.ReadLine());
    bool gt = x > 32uL; /* Oh look, a ulong. */
    Console.WriteLine(gt);
}
So, the compiler is in fact treating your const int as a ulong. If you make thirtytwo = -1, the code fails to compile, even though we then know that gt will always be true. The compiler itself can't compare a ulong to an int.
Also note that if you make x a long instead of a ulong, the compiler generates 32L rather than 32 as an integer, even though it doesn't have to. (You can compare an int and a long at runtime.)
This points to the compiler treating 32 as a ulong in the first case not because it has to, but merely because it can, to match the type of x. It's saving the runtime from having to coerce the constant, and this is just a bonus when the coercion should by rights not be possible.
It's not the CLR giving this error message; it's the compiler.
In your first example the compiler treats 32 as a ulong (or a type that's implicitly convertible to ulong, e.g. uint), whereas in your second example you've explicitly declared the type as an int. There is no overload of the > operator that accepts a ulong and an int, and hence you get a compiler error.
rich.okelly and rawling's answers are correct as to why you cannot compare them directly. You can use the Convert class's ToUInt64 method to promote the int.
if (mask > 0 & (ulong)Permissions.ViewListItems > Convert.ToUInt64(compare))
{
}
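One caveat worth noting: Convert.ToUInt64 throws an OverflowException at runtime if compare is negative. If reinterpretation (wraparound) semantics are acceptable instead, an explicit unchecked cast is an alternative (a sketch, not part of the original answer):

// Assumption: a negative compare should wrap rather than throw.
if (mask > 0 & (ulong)Permissions.ViewListItems > unchecked((ulong)compare))
{
}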

Convert unknown boxed simple value types (char, int, ulong, etc.) to UInt64

Expanding on Jon Skeet's answer to a previous question: Skeet doesn't address the failure that occurs when negative values and two's complement values enter the picture.
In short, I want to convert any simple type (held in an unknown boxed object) to System.UInt64 so I can work with the underlying binary representation.
Why do I want to do this? See the explanation at the bottom.
The example below shows the cases where Convert.ToInt64(object) and Convert.ToUInt64(object) both break (OverflowException).
There are only two causes for the OverflowExceptions below:
-10UL causes an exception when converting to Int64 because the negative value casts to 0xfffffffffffffff6 (in the unchecked context), which is a positive number larger than Int64.MaxValue. I want this to convert to -10L.
When converting to UInt64, signed types holding negative values cause an exception because -10 is less than UInt64.MinValue. I want these to convert to their true two's complement value (which is 0xfffffffffffffff6). Unsigned types don't truly hold the negative value -10 because it is converted to two's complement in the unchecked context; thus, no exception occurs with unsigned types.
The kludge solution would seem to be conversion to Int64 followed by an unchecked cast to UInt64. This intermediate cast would be easier because only one instance causes an exception for Int64 versus eight failures when converting directly to UInt64.
Note: The example uses an unchecked context only for the purpose of forcing negative values into unsigned types during boxing (which creates a positive two's complement equivalent value). This unchecked context is not a part of the problem at hand.
using System;

enum DumbEnum { Negative = -10, Positive = 10 };

class Test
{
    static void Main()
    {
        unchecked
        {
            Check((sbyte)10);
            Check((byte)10);
            Check((short)10);
            Check((ushort)10);
            Check((int)10);
            Check((uint)10);
            Check((long)10);
            Check((ulong)10);
            Check((char)'\u000a');
            Check((float)10.1);
            Check((double)10.1);
            Check((bool)true);
            Check((decimal)10);
            Check((DumbEnum)DumbEnum.Positive);
            Check((sbyte)-10);
            Check((byte)-10);
            Check((short)-10);
            Check((ushort)-10);
            Check((int)-10);
            Check((uint)-10);
            Check((long)-10);
            //Check((ulong)-10); // OverflowException
            Check((float)-10);
            Check((double)-10);
            Check((bool)false);
            Check((decimal)-10);
            Check((DumbEnum)DumbEnum.Negative);

            CheckU((sbyte)10);
            CheckU((byte)10);
            CheckU((short)10);
            CheckU((ushort)10);
            CheckU((int)10);
            CheckU((uint)10);
            CheckU((long)10);
            CheckU((ulong)10);
            CheckU((char)'\u000a');
            CheckU((float)10.1);
            CheckU((double)10.1);
            CheckU((bool)true);
            CheckU((decimal)10);
            CheckU((DumbEnum)DumbEnum.Positive);
            //CheckU((sbyte)-10); // OverflowException
            CheckU((byte)-10);
            //CheckU((short)-10); // OverflowException
            CheckU((ushort)-10);
            //CheckU((int)-10); // OverflowException
            CheckU((uint)-10);
            //CheckU((long)-10); // OverflowException
            CheckU((ulong)-10);
            //CheckU((float)-10.1); // OverflowException
            //CheckU((double)-10.1); // OverflowException
            CheckU((bool)false);
            //CheckU((decimal)-10); // OverflowException
            //CheckU((DumbEnum)DumbEnum.Negative); // OverflowException
        }
    }

    static void Check(object o)
    {
        Console.WriteLine("Type {0} converted to Int64: {1}",
            o.GetType().Name, Convert.ToInt64(o));
    }

    static void CheckU(object o)
    {
        Console.WriteLine("Type {0} converted to UInt64: {1}",
            o.GetType().Name, Convert.ToUInt64(o));
    }
}
WHY?
Why do I want to be able to convert all these value types to and from UInt64? Because I have written a class library that converts structs or classes to bit fields packed into a single UInt64 value.
Example: Consider the DiffServ field in every IP packet header, which is composed of a number of binary bit fields.
Using my class library, I can create a struct to represent the DiffServ field. I created a BitFieldAttribute which indicates which bits belong where in the binary representation:
struct DiffServ : IBitField
{
    [BitField(3,0)]
    public PrecedenceLevel Precedence;
    [BitField(1,3)]
    public bool Delay;
    [BitField(1,4)]
    public bool Throughput;
    [BitField(1,5)]
    public bool Reliability;
    [BitField(1,6)]
    public bool MonetaryCost;
}

enum PrecedenceLevel
{
    Routine, Priority, Immediate, Flash, FlashOverride, CriticEcp,
    InternetworkControl, NetworkControl
}
My class library can then convert an instance of this struct to and from its proper binary representation:
// Create an arbitrary DiffServ instance.
DiffServ ds = new DiffServ();
ds.Precedence = PrecedenceLevel.Immediate;
ds.Throughput = true;
ds.Reliability = true;
// Convert struct to value.
long dsValue = ds.Pack();
// Create struct from value.
DiffServ ds2 = Unpack<DiffServ>(0x66);
To accomplish this, my class library looks for fields/properties decorated with the BitFieldAttribute. Getting and setting members retrieves an object containing the boxed value type (int, bool, enum, etc.). Therefore, I need to unbox any value type and convert it to its bare-bones binary representation so that the bits can be extracted and packed into a UInt64 value.
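For illustration, here is a minimal hand-rolled sketch of the kind of masking and shifting such a library automates (a hypothetical helper; the width/offset pairs mirror the BitField(width, offset) attributes above, though the library's actual bit-ordering conventions may differ):

// Hypothetical manual equivalent of the attribute-driven packing.
static ulong PackDiffServ(PrecedenceLevel precedence, bool delay,
                          bool throughput, bool reliability, bool monetaryCost)
{
    ulong value = 0;
    value |= ((ulong)precedence & 0x7UL) << 0; // 3 bits at offset 0
    value |= (delay ? 1UL : 0UL) << 3;         // 1 bit at offset 3
    value |= (throughput ? 1UL : 0UL) << 4;    // 1 bit at offset 4
    value |= (reliability ? 1UL : 0UL) << 5;   // 1 bit at offset 5
    value |= (monetaryCost ? 1UL : 0UL) << 6;  // 1 bit at offset 6
    return value;
}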
I'm going to post my best solution as fodder for the masses.
These conversions eliminate all exceptions (except for very large float, double, decimal values which do not fit in 64-bit integers) when unboxing an unknown simple value type held in object o:
long l = o is ulong ? (long)(ulong)o : Convert.ToInt64(o);
ulong u = o is ulong ? (ulong)o : (ulong)Convert.ToInt64(o);
Any improvements to this will be welcomed.
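A quick check of those expressions with a boxed negative int (assuming the default unchecked runtime context):

object o = -10;                     // boxed int
ulong u = o is ulong ? (ulong)o : (ulong)Convert.ToInt64(o);
Console.WriteLine(u.ToString("x")); // fffffffffffffff6 - the two's complement bits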

C# 0 (minus) uint = unsigned result?

public void Foo(double d)
{
    // when called below, d == 2^32-1
    ...
}

public void Bar()
{
    uint ui = 1;
    Foo(0 - ui);
}
I would expect both 0 and ui to be promoted to signed longs here.
True, with the 0 literal it is knowable at compile time that a cast to uint is safe,
but I suppose this all just seems wrong. At least a warning should be issued.
Thanks!
Does the language spec cover a semi-ambiguous case like this?
Why would anything be promoted to long? The spec (section 7.8.5) lists four operators for integer subtraction:
int operator-(int x, int y);
uint operator-(uint x, uint y);
long operator-(long x, long y);
ulong operator-(ulong x, ulong y);
Given that the constant value 0 is implicitly convertible to uint, but the uint value ui is not implicitly convertible to int, the second operator is chosen according to the binary operator overload resolution steps described in section 7.3.4.
(Is it possible that you were unaware of the implicit constant expression conversion from 0 to uint and that that was the confusing part? See section 6.1.9 of the C# 4 spec for details.)
Following section 7.3.4 (which then refers to 7.3.5, and 7.5.3) is slightly tortuous, but I believe it's well-defined, and not at all ambiguous.
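The chosen operator can be observed directly (a quick sketch):

uint ui = 1;
var result = 0 - ui;                 // uint operator- is selected; the constant 0 converts to uint
Console.WriteLine(result);           // 4294967295 (wraps in the default unchecked context)
Console.WriteLine(result.GetType()); // System.UInt32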
If it's the overflow that bothers you, would you expect this to fail as well?
int x = 10;
int y = int.MaxValue - 5;
int z = x + y;
If not, what's really the difference here?
It's the int that is being cast to uint to perform the subtraction from 0 (which the compiler implicitly interprets as a uint). Note that the constant 0 undergoes an implicit constant expression conversion to uint, hence no warning. There is nothing wrong with your code... except that uint is not CLS-compliant. You can read why here. More info on writing CLS-compliant code is on MSDN.
In a checked context, if the difference is outside the range of the result type, a System.OverflowException is thrown. In an unchecked context, overflows are not reported and any significant high-order bits outside the range of the result type are discarded.
http://msdn.microsoft.com/en-us/library/aa691376(v=vs.71).aspx
Technically, doing the following:
double d = checked(0 - ui);
will result in a System.OverflowException being thrown, which is perhaps what you are expecting; but according to the spec, since this context is not checked, the overflow is not reported.
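Both behaviors side by side (a minimal sketch):

uint ui = 1;
Console.WriteLine(unchecked(0 - ui));   // 4294967295: the overflow is silently discarded
try
{
    Console.WriteLine(checked(0 - ui)); // throws before printing anything
}
catch (OverflowException)
{
    Console.WriteLine("OverflowException");
}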

Why does this implicit conversion from int to uint work?

Using Casting null doesn't compile as inspiration, and from Eric Lippert's comment:
That demonstrates an interesting case. "uint x = (int)0;" would
succeed even though int is not implicitly convertible to uint.
We know this doesn't work, because object can't be assigned to string:
string x = (object)null;
But this does, although intuitively it shouldn't:
uint x = (int)0;
Why does the compiler allow this case, when int isn't implicitly convertible to uint?
Integer constant conversions are treated as very special by the C# language; here's section 6.1.9 of the specification:
A constant expression of type int can be converted to type sbyte, byte, short, ushort, uint, or ulong, provided the value of the constant-expression is within the range of the destination type. A constant expression of type long can be converted to type ulong, provided the value of the constant expression is not negative.
This permits you to do things like:
byte x = 64;
which would otherwise require an ugly explicit conversion:
byte x = (byte)64; // gross
The following code will fail with the message "Cannot implicitly convert type 'int' to 'uint'. An explicit conversion exists (are you missing a cast?)":
int y = 0;
uint x = (int)y;
And this will fail with: "Constant value '-1' cannot be converted to a 'uint'"
uint x = (int)-1;
So the only reason uint x = (int)0; works is that the compiler sees that 0 (or any other non-negative constant in the uint range) is a compile-time constant that can be converted to a uint.
In general, compilers have four steps in which the code is converted:
text is tokenized > tokens are parsed > an AST is built + linking > the AST is converted to the target language.
The evaluation of constants such as numbers and strings occurs as a first step, and the compiler probably treats 0 as a valid constant and ignores the cast.
