char x = (char)-1;
is valid in Java, but in C# it shows me the error "Overflow in constant value computation".
Should I use a different datatype in C#?
The error occurs because C# checks constant computations for overflow at compile time ("constant computation"). In C# one could instead do:
int x = -1;
char c = (char)x;
int y = c;
// y is 0xffff, as per Java
However, do note that 0xFFFF is not a valid Unicode character (it's a designated noncharacter) :)
Happy coding.
Using unchecked will also "work":
unchecked {
char x = (char)-1;
}
Here's the definition of the C# char type; it's a 16-bit Unicode character, same as in Java. If you are just looking for a 16-bit signed value, then you might want a short.
For your reference, here is a list of the integral types available to you in C#.
You can also use an unchecked statement such as unchecked { char x = (char)-1; }. However, if it were me and I was using -1 to represent, for instance, an error value or some other marker, I would probably just use char x = (char)0xFFFF; instead. That gives you the same result, plus a way of checking for the invalid value, without needing to circumvent the type check.
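As a rough sketch of that marker approach (the Marker name is just an illustration, not from the original answer):

using System;

// A marker value: 0xFFFF is a Unicode noncharacter, so it cannot clash with real text.
const char Marker = (char)0xFFFF;   // 0xFFFF fits in char, so no unchecked needed

char x = Marker;
Console.WriteLine((int)x);          // 65535 (0xFFFF)
Console.WriteLine(x == Marker);     // True: easy to test for the invalid value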
Related
We have a project migration happening from C# .NET to Go. I have completed most of it, but I am stuck at one place. In C#, I have this code:
(int)char < 31
How can I write this in Go?
There is no "char" type in Go, the closest you can get is rune which is an alias for int32.
Being an alias of int32 means the types int32 and rune are identical and you can treat a rune like an int32 number (so you can compare it, add to it / subtract from it etc.).
But know that Go is strict about types: you can't compare values of different integer types (in your own answer you compare a rune with an untyped integer constant, which is OK). For example, the following code is a compile-time error:
var r rune = 'a'
var i int = 100
if r < i { // Compile-time error: invalid operation: r < i (mismatched types rune and int)
fmt.Println("less")
}
Should you need to convert a rune or any other integer type to another integer type (e.g. rune to int), you can use a simple type conversion, e.g.:
var r rune = 'a'
var i int = 100
if int(r) < i {
fmt.Println("less")
}
See related question: Equivalent of python's ord(), chr() in go?
I found the answer myself with the change below:
var r rune
r = 'a' // char
if r < 31 { // a rune compares directly against an untyped integer constant
    // ...
}
This worked for me.
I want to use HEX number to assign a value to an int:
int i = 0xFFFFFFFF; // effectively, set i to -1
Understandably, the compiler complains.
Question: how do I make the above work?
Here is why I need this.
The WriteableBitmap class exposes its pixel array as int[]. So if I want to set a pixel to blue, I would write 0xFF0000FF (ARGB), which is -16776961 as an int.
Plus, I am curious whether there is an elegant, compile-time solution.
I know there is a:
int i = BitConverter.ToInt32(new byte[] { 0xFF, 0x00, 0x00, 0xFF }, 0);
but it is neither elegant nor compile-time.
Give someone a fish and you feed them for a day. Teach them to pay attention to compiler error messages and they don't have to ask questions on the internet that are answered by the error message.
int i = 0xFFFFFFFF;
produces:
Cannot implicitly convert type 'uint' to 'int'. An explicit conversion exists
(are you missing a cast?)
Pay attention to the error message and try adding a cast:
int i = (int)0xFFFFFFFF;
Now the error is:
Constant value '4294967295' cannot be converted to a 'int'
(use 'unchecked' syntax to override)
Again, pay attention to the error message. Use the unchecked syntax.
int i = unchecked((int)0xFFFFFFFF);
Or
unchecked
{
int i = (int)0xFFFFFFFF;
}
And now, no error.
As an alternative to using the unchecked syntax, you could specify /checked- on the compiler switches, if you like to live dangerously.
Bonus question:
What makes the literal a uint in the first place?
The type of an integer literal does not depend on whether it is hex or decimal. Rather:
If a decimal literal has the U or L suffixes then it is uint, long or ulong, depending on what combination of suffixes you choose.
If it does not have a suffix then we take the value of the literal and see if it fits into the range of an int, uint, long or ulong. Whichever one matches first on that list is the type of the expression.
In this case the hex literal has a value that is outside the range of int but inside the range of uint, so it is treated as a uint.
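You can see this rule in action by asking a few parenthesized literals for their types (a quick sketch):

using System;

Console.WriteLine((2147483647).GetType());  // System.Int32  : fits in int
Console.WriteLine((0xFFFFFFFF).GetType());  // System.UInt32 : too big for int, fits uint
Console.WriteLine((0x1FFFFFFFF).GetType()); // System.Int64  : too big for uint, fits long
Console.WriteLine((0xFFFFFFFFU).GetType()); // System.UInt32 : the U suffix allows only uint or ulong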
You just need an unchecked cast:
unchecked
{
int i = (int)0xFFFFFFFF;
Console.WriteLine("here it is: {0}", i);
}
The unchecked syntax seems a bit gar'ish (!) when compared to the various single-letter numerical suffixes available.
So I tried for a shellfish:
static public class IntExtMethods
{
public static int ui(this uint a)
{
return unchecked((int)a);
}
}
Then
int i = 0xFFFFFFFF.ui();
Because the lake has more fish.
Note: it's not a constant expression, so it can't be used, for instance, to initialize an enum field.
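For contrast, a sketch of that limitation (the PixelColors enum and its member are purely illustrative):

enum PixelColors : int
{
    // Blue = 0xFF0000FF.ui(),         // won't compile: a method call is never a constant expression
    Blue = unchecked((int)0xFF0000FF), // fine: an unchecked cast of a constant is still a constant
}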
I want to know why the following throws an InvalidCastException:
Object obj = 9;
long num = (long)obj; //InvalidCastException
After searching on the net, I found out that the object holds 9 as an int, so long doesn't exactly match int.
My question is: why does the object consider 9 to be an int rather than a short or a long?
Because 9 is an Int32 literal. To specify an Int64 literal use
Object obj = 9L;
long num = (long)obj;
You can actually make this work if you explicitly say that it's a long. Plain numeric literals are read as ints, unless they have a suffix or a decimal point.
Object obj = 9L;
long num = (long)obj;
The following will also result in an invalid cast exception:
Object obj = 9L;
int num = (int)obj; //InvalidCastException
int is the default type for numeric literals without a decimal point, just as double is the default for literals with one. You can force numeric literals to other types with the appropriate suffixes. There are suffixes for int and double too, but pretty much no one ever uses them.
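A minimal sketch of the usual workaround: unbox to the exact type that was boxed, then let the implicit widening conversion do the rest:

using System;

object obj = 9;            // boxes an int (Int32)
long ok = (int)obj;        // unbox as the exact type (int), then implicitly widen to long
Console.WriteLine(ok);     // 9
// long bad = (long)obj;   // InvalidCastException: unboxing must name the exact boxed type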
In C#, I want to check whether a value in 8-digit hexadecimal format (e.g. 0000000a or 00010ef0) is within a specific range.
For example (the following is C code):
int temp=5;
if (temp>=0 || temp<10)
printf("temp is between 0-10");
In the same way, I want to check whether a hexadecimal value is in a given range or not.
int temp=0x5; followed by if (temp >= 0xa || temp < 0x10ef0) looks like what you want.
That || means OR; however, you probably want AND (&&).
Basically, prefix with 0x to tell the compiler you're specifying something in hex.
You may use int family types:
int val=0x19;
You can convert the hex value's string representation to an integer:
int hexvalue = int.Parse(value.ToString(),System.Globalization.NumberStyles.HexNumber);
and then do your normal test (note && rather than ||, otherwise the condition is always true):
if (hexvalue >= 0 && hexvalue < 10)
    Console.WriteLine("hexvalue is between 0-10");
Except for the printf, the code you posted in C compiles with the same effect in C#:
int temp=5;
if (temp>=0 || temp<10)
Console.WriteLine("temp is between 0-10");
If you need to represent a hexadecimal constant in C#, prefix it with 0x, as you are used to doing in C.
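Putting the pieces together, a sketch that uses the bounds from the question and && rather than || (with ||, the test is always true):

using System;

int temp = 0x5;                               // hex literal, 0x prefix as in C
if (temp >= 0x0000000A && temp < 0x00010EF0)  // && (AND): both bounds must hold
    Console.WriteLine("temp is in [0x0000000A, 0x00010EF0)");
else
    Console.WriteLine("temp is out of range"); // 0x5 lands here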
When I write a for loop as below, it works fine.
for (Char ch = 'A'; ch < 'Z'; ch++)
{
Console.WriteLine(ch.ToString());
}
I thought that the compiler converts the char type to int, but when I looked at the decompiled code, this is what I saw:
for (char i = 65; i < 97; i = (ushort)i + 1)
{
Console.WriteLine(i.ToString());
}
Can someone please explain why the compiler did not change the datatype of i from non-numeric to numeric?
--EDIT-- Added decompiler screenshot
To answer the question title,
Why is char being converted into a ushort instead of an int in this decompilation?
Chars are 16-bit, and so are unsigned shorts. There simply isn't a need to convert to any larger-ranged type. The decompiler you used was probably working based on that.
To answer your edited question,
Can someone please explain why the compiler did not change the datatype of i from non-numeric to numeric?
It's precisely because chars, while they have corresponding numeric character codes, aren't themselves the same as numeric types. You can cast an integer to a char, but they're not the same thing. Consequently, ((char) 65).ToString() is not the same as ((int) 65).ToString().
For the record, .NET Reflector 7 decompiles your code to this:
for (char ch = 'A'; ch < 'Z'; ch = (char) (ch + '\x0001'))
{
Console.WriteLine(ch.ToString());
}
No sign of any integers anywhere according to Reflector. The code is almost identical to what you originally wrote.
If you want to look at what's really happening, look at the IL.
A char can be implicitly converted to ushort, int, uint, long, ulong, float, double, or decimal.
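A short sketch tying these conversion rules together:

using System;

char c = (char)65;               // int -> char needs an explicit cast
int n = c;                       // char -> int is implicit, no cast needed
Console.WriteLine(c.ToString()); // "A"  : a char formats as the character itself
Console.WriteLine(n.ToString()); // "65" : an int formats as the number
c = (char)(c + 1);               // c + 1 is an int, so cast back to char
Console.WriteLine(c);            // "B"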