Casting a hex number under unchecked scope [duplicate]

I want to use a hex number to assign a value to an int:
int i = 0xFFFFFFFF; // effectively, set i to -1
Understandably, the compiler complains.
Question: how do I make the above work?
Here is why I need this.
The WriteableBitmap class exposes its pixel array as int[], so if I want to set a pixel to blue, the value I need is 0xFF0000FF (ARGB), which is -16776961 as a signed int.
Plus, I am curious whether there is an elegant, compile-time solution.
I know there is a:
int i = BitConverter.ToInt32(new byte[] { 0xFF, 0x00, 0x00, 0xFF }, 0);
but it is neither elegant nor compile-time.

Give someone a fish and you feed them for a day. Teach them to pay attention to compiler error messages and they don't have to ask questions on the internet that are answered by the error message.
int i = 0xFFFFFFFF;
produces:
Cannot implicitly convert type 'uint' to 'int'. An explicit conversion exists
(are you missing a cast?)
Pay attention to the error message and try adding a cast:
int i = (int)0xFFFFFFFF;
Now the error is:
Constant value '4294967295' cannot be converted to a 'int'
(use 'unchecked' syntax to override)
Again, pay attention to the error message. Use the unchecked syntax.
int i = unchecked((int)0xFFFFFFFF);
Or
unchecked
{
    int i = (int)0xFFFFFFFF;
}
And now, no error.
As an alternative to using the unchecked syntax, you could specify /checked- on the compiler switches, if you like to live dangerously.
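Tying this back to the pixel scenario in the question, a minimal sketch (the WriteableBitmap plumbing is omitted; blue is just an illustrative local):
    // 0xFF0000FF is opaque blue in ARGB; reinterpreted as a signed 32-bit int it is -16776961.
    int blue = unchecked((int)0xFF0000FF);
    Console.WriteLine(blue); // prints -16776961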
Bonus question:
What makes the literal a uint in the first place?
The type of an integer literal does not depend on whether it is hex or decimal. Rather:
If a decimal literal has the U or L suffixes then it is uint, long or ulong, depending on what combination of suffixes you choose.
If it does not have a suffix then we take the value of the literal and see if it fits into the range of an int, uint, long or ulong. Whichever one matches first on that list is the type of the expression.
In this case the hex literal has a value that is outside the range of int but inside the range of uint, so it is treated as a uint.
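A few declarations that follow directly from those rules (the inferred types are shown in comments):
    var a = 2147483647;   // int: the value fits in int (int.MaxValue)
    var b = 2147483648;   // uint: too big for int, fits in uint
    var c = 0xFFFFFFFF;   // uint: 4294967295; hex vs. decimal makes no difference
    var d = 0x100000000;  // long: too big for uint
    var e = 10U;          // uint: forced by the U suffix
    var f = 10L;          // long: forced by the L suffix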

You just need an unchecked cast:
unchecked
{
    int i = (int)0xFFFFFFFF;
    Console.WriteLine("here it is: {0}", i);
}

The unchecked syntax seems a bit gar'ish (!) when compared to the various single-letter numerical suffixes available.
So I tried for a shellfish:
static public class IntExtMethods
{
    public static int ui(this uint a)
    {
        return unchecked((int)a);
    }
}
Then
int i = 0xFFFFFFFF.ui();
Because the lake has more fish.
Note: it's not a constant expression, so it can't be used to initialize an enum field for instance.
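To make that limitation concrete, a quick sketch (Colors and Blue are hypothetical names):
    int blue = 0xFF0000FF.ui();               // fine: an ordinary run-time initializer
    // const int Blue = 0xFF0000FF.ui();      // error: a const requires a constant expression
    // enum Colors { Blue = 0xFF0000FF.ui() } // error: enum members must be constant expressions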

Related

bitwise NOT on const byte fields in C#

I realized that if I have a field or variable of type 'byte', I can apply bitwise NOT (~) to it and cast the result to byte. However, if the field is 'const byte', I can still apply bitwise NOT (~), but I cannot cast the result to byte. For example,
This compiles:
class Program
{
    byte b = 7;
    void Method()
    {
        byte bb = (byte) ~b;
    }
}
But this has a compile error ("Constant value '-8' cannot be converted to a 'byte'"):
class Program
{
    const byte b = 7;
    void Method()
    {
        byte bb = (byte) ~b;
    }
}
I wonder why?
Because the ~ operator is only predefined for int, uint, long, and ulong. Your first sample implicitly converts b to an int, applies the bitwise NOT, then explicitly casts the result back to a byte.
In the second example, b is a constant, so the compiler also evaluates the bitwise NOT at compile time, effectively producing a constant int with a value of -8 (the two's complement bitwise NOT of 7). And since a negative constant value can't be cast to a byte (without adding an unchecked context), you get a compilation error.
To avoid the error just store the result in a non-constant int variable:
const byte b = 7;
void Main()
{
    int i = ~b;
    byte bb = (byte)i;
}
There is no ~ operator defined for byte. It's defined for int. The byte is implicitly converted to an int, and that int is NOT-ed. That resulting int is not in the range of a byte (0 - 255, inclusive), so it can only be converted to a byte at compile-time via an unchecked cast:
byte bb = unchecked((byte)~b);
The second program doesn't compile because, due to the use of compile time constants, it's able to validate the improper conversion at compile time. The compiler cannot make this assertion with non-compile time constant values.
I can't explain the difference, but the simple solution for your second program is to mark it unchecked like so:
byte bb = unchecked((byte)~b);
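Putting the pieces together, here is a minimal compilable sketch of the unchecked approach:
    class Program
    {
        const byte b = 7;
        static void Main()
        {
            // ~b is folded at compile time into the int constant -8 (0xFFFFFFF8).
            // byte bad = (byte)~b;          // compile error: constant -8 is out of byte range
            byte bb = unchecked((byte)~b);   // keeps the low 8 bits: 0xF8 == 248
            System.Console.WriteLine(bb);    // prints 248
        }
    }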

Operator ">" cannot be applied to type 'ulong' and 'int'

I'm curious to know why the C# compiler only gives me an error message for the second if statement.
enum Permissions : ulong
{
    ViewListItems = 1L,
}
public void Method()
{
    int mask = 138612833;
    int compare = 32;
    if (mask > 0 & (ulong)Permissions.ViewListItems > 32)
    {
        //Works
    }
    if (mask > 0 & (ulong)Permissions.ViewListItems > compare)
    {
        //Operator '>' cannot be applied to operands of type 'ulong' and 'int'
    }
}
I've been experimenting with this, using ILSpy to examine the output, and this is what I've discovered.
Obviously in your second case this is an error - you can't compare a ulong and an int because there isn't a type you can coerce both to. A ulong might be too big for a long, and an int might be negative.
In your first case, however, the compiler is being clever. It realises that const 1 > const 32 is never true, and doesn't include your if statement in the compiled output at all. (It should give a warning for unreachable code.) It's the same if you define and use a const int rather than a literal, or even if you cast the literal explicitly (i.e. (int)32).
But then isn't the compiler successfully comparing a ulong with an int, which we just said was impossible?
Apparently not. So what is going on?
Try instead to do something along the following lines. (Taking input and writing output so the compiler doesn't compile anything away.)
const int thirtytwo = 32;
static void Main(string[] args)
{
    ulong x = ulong.Parse(Console.ReadLine());
    bool gt = x > thirtytwo;
    Console.WriteLine(gt);
}
This will compile, even though the ulong is a variable, and even though the result isn't known at compile time. Take a look at the output in ILSpy:
private static void Main(string[] args)
{
    ulong x = ulong.Parse(Console.ReadLine());
    bool gt = x > 32uL; /* Oh look, a ulong. */
    Console.WriteLine(gt);
}
So, the compiler is in fact treating your const int as a ulong. If you make thirtytwo = -1, the code fails to compile, even though we then know that gt will always be true. The compiler itself can't compare a ulong to an int.
Also note that if you make x a long instead of a ulong, the compiler generates 32L rather than 32 as an integer, even though it doesn't have to. (You can compare an int and a long at runtime.)
This points to the compiler not treating 32 as a ulong in the first case because it has to, merely because it can match the type of x. It's saving the runtime from having to coerce the constant, and this is just a bonus when the coercion should by rights not be possible.
It's not the CLR giving this error message; it's the compiler.
In your first example the compiler treats 32 as a ulong (or a type that's implicitly convertible to ulong, e.g. uint), whereas in your second example you've explicitly declared the type as an int. There is no overload of the > operator that accepts a ulong and an int, and hence you get a compiler error.
rich.okelly and rawling's answers are correct as to why you cannot compare them directly. You can use the Convert class's ToUInt64 method to promote the int.
if (mask > 0 & (ulong)Permissions.ViewListItems > Convert.ToUInt64(compare))
{
}
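If compare is known to be non-negative, an explicit cast is a lighter-weight alternative to the Convert call; this is just a sketch of that option (note that, unlike Convert.ToUInt64, the cast would silently wrap a negative value):
    if (mask > 0 & (ulong)Permissions.ViewListItems > (ulong)compare)
    {
        // same comparison, with compare explicitly widened to ulong
    }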

Convert unknown boxed simple value types (char, int, ulong, etc.) to UInt64

Expanding on Jon Skeet's answer to this previous question: Skeet doesn't address the failure that occurs when negative values and two's complement values enter the picture.
In short, I want to convert any simple type (held in an unknown boxed object) to System.UInt64 so I can work with the underlying binary representation.
Why do I want to do this? See the explanation at the bottom.
The example below shows the cases where Convert.ToInt64(object) and Convert.ToUInt64(object) both break (OverflowException).
There are only two causes for the OverflowExceptions below:
-10UL causes an exception when converting to Int64 because the negative value casts to 0xfffffffffffffff6 (in the unchecked context), which is a positive number larger than Int64.MaxValue. I want this to convert to -10L.
When converting to UInt64, signed types holding negative values cause an exception because -10 is less than UInt64.MinValue. I want these to convert to their true two's complement value (which is 0xfffffffffffffff6). Unsigned types don't truly hold the negative value -10 because it is converted to two's complement in the unchecked context; thus, no exception occurs with unsigned types.
The kludge solution would seem to be conversion to Int64 followed by an unchecked cast to UInt64. This intermediate cast would be easier because only one instance causes an exception for Int64 versus eight failures when converting directly to UInt64.
Note: The example uses an unchecked context only for the purpose of forcing negative values into unsigned types during boxing (which creates a positive two's complement equivalent value). This unchecked context is not a part of the problem at hand.
using System;
enum DumbEnum { Negative = -10, Positive = 10 };
class Test
{
    static void Main()
    {
        unchecked
        {
            Check((sbyte)10);
            Check((byte)10);
            Check((short)10);
            Check((ushort)10);
            Check((int)10);
            Check((uint)10);
            Check((long)10);
            Check((ulong)10);
            Check((char)'\u000a');
            Check((float)10.1);
            Check((double)10.1);
            Check((bool)true);
            Check((decimal)10);
            Check((DumbEnum)DumbEnum.Positive);
            Check((sbyte)-10);
            Check((byte)-10);
            Check((short)-10);
            Check((ushort)-10);
            Check((int)-10);
            Check((uint)-10);
            Check((long)-10);
            //Check((ulong)-10); // OverflowException
            Check((float)-10);
            Check((double)-10);
            Check((bool)false);
            Check((decimal)-10);
            Check((DumbEnum)DumbEnum.Negative);
            CheckU((sbyte)10);
            CheckU((byte)10);
            CheckU((short)10);
            CheckU((ushort)10);
            CheckU((int)10);
            CheckU((uint)10);
            CheckU((long)10);
            CheckU((ulong)10);
            CheckU((char)'\u000a');
            CheckU((float)10.1);
            CheckU((double)10.1);
            CheckU((bool)true);
            CheckU((decimal)10);
            CheckU((DumbEnum)DumbEnum.Positive);
            //CheckU((sbyte)-10); // OverflowException
            CheckU((byte)-10);
            //CheckU((short)-10); // OverflowException
            CheckU((ushort)-10);
            //CheckU((int)-10); // OverflowException
            CheckU((uint)-10);
            //CheckU((long)-10); // OverflowException
            CheckU((ulong)-10);
            //CheckU((float)-10.1); // OverflowException
            //CheckU((double)-10.1); // OverflowException
            CheckU((bool)false);
            //CheckU((decimal)-10); // OverflowException
            //CheckU((DumbEnum)DumbEnum.Negative); // OverflowException
        }
    }
    static void Check(object o)
    {
        Console.WriteLine("Type {0} converted to Int64: {1}",
            o.GetType().Name, Convert.ToInt64(o));
    }
    static void CheckU(object o)
    {
        Console.WriteLine("Type {0} converted to UInt64: {1}",
            o.GetType().Name, Convert.ToUInt64(o));
    }
}
WHY?
Why do I want to be able to convert all these value types to and from UInt64? Because I have written a class library that converts structs or classes to bit fields packed into a single UInt64 value.
Example: Consider the DiffServ field in every IP packet header, which is composed of a number of binary bit fields.
Using my class library, I can create a struct to represent the DiffServ field. I created a BitFieldAttribute which indicates which bits belong where in the binary representation:
struct DiffServ : IBitField
{
    [BitField(3,0)]
    public PrecedenceLevel Precedence;
    [BitField(1,3)]
    public bool Delay;
    [BitField(1,4)]
    public bool Throughput;
    [BitField(1,5)]
    public bool Reliability;
    [BitField(1,6)]
    public bool MonetaryCost;
}
enum PrecedenceLevel
{
    Routine, Priority, Immediate, Flash, FlashOverride, CriticEcp,
    InternetworkControl, NetworkControl
}
My class library can then convert an instance of this struct to and from its proper binary representation:
// Create an arbitrary DiffServ instance.
DiffServ ds = new DiffServ();
ds.Precedence = PrecedenceLevel.Immediate;
ds.Throughput = true;
ds.Reliability = true;
// Convert struct to value.
long dsValue = ds.Pack();
// Create struct from value.
DiffServ ds2 = Unpack<DiffServ>(0x66);
To accomplish this, my class library looks for fields/properties decorated with the BitFieldAttribute. Getting and setting members retrieves an object containing the boxed value type (int, bool, enum, etc.). Therefore, I need to unbox any value type and convert it to its bare-bones binary representation so that the bits can be extracted and packed into a UInt64 value.
I'm going to post my best solution as fodder for the masses.
These conversions eliminate all exceptions (except for very large float, double, decimal values which do not fit in 64-bit integers) when unboxing an unknown simple value type held in object o:
long l = o is ulong ? (long)(ulong)o : Convert.ToInt64(o);
ulong u = o is ulong ? (ulong)o : (ulong)Convert.ToInt64(o);
Any improvements to this will be welcomed.
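One way to package those two expressions is a small helper; this is only a sketch, and ToRawBits is a hypothetical name, not part of any library:
    static ulong ToRawBits(object o)
    {
        // ulong is the only simple type whose bit pattern can exceed Int64.MaxValue,
        // so it is unboxed directly; everything else goes through Convert.ToInt64 and
        // is reinterpreted as a ulong without changing the underlying bits.
        return (o is ulong) ? (ulong)o : unchecked((ulong)Convert.ToInt64(o));
    }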

Why does this implicit conversion from int to uint work?

Using Casting null doesn't compile as inspiration, and from Eric Lippert's comment:
That demonstrates an interesting case. "uint x = (int)0;" would
succeed even though int is not implicitly convertible to uint.
We know this doesn't work, because object can't be assigned to string:
string x = (object)null;
But this does, although intuitively it shouldn't:
uint x = (int)0;
Why does the compiler allow this case, when int isn't implicitly convertible to uint?
Integer constant conversions are treated as very special by the C# language; here's section 6.1.9 of the specification:
A constant expression of type int can be converted to type sbyte, byte, short, ushort, uint, or ulong, provided the value of the constant-expression is within the range of the destination type. A constant expression of type long can be converted to type ulong, provided the value of the constant expression is not negative.
This permits you to do things like:
byte x = 64;
which would otherwise require an ugly explicit conversion:
byte x = (byte)64; // gross
The following code will fail with the message "Cannot implicitly convert type 'int' to 'uint'. An explicit conversion exists (are you missing a cast?)"
int y = 0;
uint x = (int)y;
And this will fail with: "Constant value '-1' cannot be converted to a 'uint'"
uint x = (int)-1;
So the only reason uint x = (int)0; works is that the compiler sees that 0 (or any other non-negative int value) is a compile-time constant that can be converted to a uint.
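A few cases side by side show where the constant rule does and does not apply (a quick sketch):
    uint a = 0;          // OK: non-negative int constant, value fits in uint
    uint b = (int)0;     // OK: still a constant expression after the cast
    byte c = 255;        // OK: fits in byte
    // byte d = 256;     // error: constant 256 is out of byte range
    // uint e = -1;      // error: constant -1 cannot be converted to uint
    int y = 0;
    // uint f = y;       // error: y is a variable, so the constant rule does not apply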
In general compilers have 4 steps in which the code is converted.
Text is tokenized > Tokens are parsed > An AST is built + linking > the AST is converted to the target language.
The evaluation of constants such as numbers and strings occurs as a first step and the compiler probably treats 0 as a valid token and ignores the cast.

Difference between char in Java and C# (particular problem with (char)-1)

char x = (char)-1;
is valid in Java, but in C# it produces the error "Overflow in constant value computation".
Should I use a different datatype in C#?
The error occurs because C# is smarter about literals ("constant computation"). In C# one could do...
int x = -1;
char c = (char)x;
int y = c;
// y is 0xffff, as per Java
However, do note that 0xFFFF is an invalid Unicode character :)
Happy coding.
Using unchecked will also "work":
unchecked {
    char x = (char)-1;
}
Here's the definition of the C# char type; it's a 16-bit Unicode character, same as in Java. If you are just looking for a 16-bit signed value, then you might want a short.
For your reference, here is a list of the integral types available to you in C#.
You can also use an unchecked statement such as unchecked { char x = (char)-1; }; however, if it were me and I was, for instance, using -1 to represent an error value or some other marker, I would probably just use: char x = (char)0xFFFF; which gives you the same result, and a way of checking for an invalid value, without needing to circumvent the type check.
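Both routes produce the same 16-bit pattern; a quick sketch:
    char a = unchecked((char)-1);   // bypasses the compile-time range check
    char b = (char)0xFFFF;          // no unchecked needed: 0xFFFF fits in char's range
    Console.WriteLine(a == b);      // True
    Console.WriteLine((int)a);      // 65535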
