I want to declare -1 literal using the new binary literal feature:
int x = 0b1111_1111_1111_1111_1111_1111_1111_1111;
Console.WriteLine(x);
However, this doesn't work because C# treats this as a uint literal, and we get Cannot implicitly convert type 'uint' to 'int'..., which is a bit strange to me since we're dealing with binary data.
Is there a way to declare -1 integer value using binary literal in C#?
After trying a few cases, I finally found this one:
int x = -0b000_0000_0000_0000_0000_0000_0000_0001;
Console.WriteLine(x);
And the result printed is -1.
If I understand everything correctly, the sign bit is used for -/+, so when you write 32 ones the value no longer fits in int and the literal becomes a uint.
You can explicitly cast it, but because a constant expression is involved, I believe you have to manually specify unchecked:
int x = unchecked((int)0b1111_1111_1111_1111_1111_1111_1111_1111);
(Edited to include Jeff Mercado's suggestion.)
You can also use something like int x = -0b1 as pointed out in S.Petrosov's answer, but of course that doesn't show the actual bit representation of -1, which might defeat the purpose of declaring it using a binary literal in the first place.
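A quick sanity check combining both approaches (a minimal sketch; both lines print -1):
int a = unchecked((int)0b1111_1111_1111_1111_1111_1111_1111_1111);
int b = -0b1;
Console.WriteLine(a); // -1 (all 32 bits set)
Console.WriteLine(b); // -1 (negated one-bit literal)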
I have a string that contains numbers, like so:
string keyCode = "1200009990000000000990";
Now I want to get the number on position 2 as integer, which I did like so:
int number = Convert.ToInt32(keyCode[1]);
But instead of getting 2, I get 50.
What am I doing wrong?
50 is the ASCII code for the char '2' ('0' -> 48, '1' -> 49, etc.).
You can do
int number = keyCode[1]-'0';
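For example, with the string from the question (a minimal sketch):
string keyCode = "1200009990000000000990";
int number = keyCode[1] - '0'; // '2' (50) minus '0' (48) == 2
Console.WriteLine(number);     // 2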
You observed that when you do int n = Convert.ToInt32( '2' ); you get 50. That's correct.
Apparently, you did not expect to get 50, you expected to get 2. That's what's not correct to expect.
It is not correct to expect that Convert.ToInt32( '2' ) will give you 2, because then what would you expect if you did Convert.ToInt32( 'A' ) ?
A character is not a number. But of course, inside the computer, everything is represented as a number. So, there is a mapping that tells us what number to use to represent each character. Unicode is such a mapping, ASCII is another mapping that you may have heard of. These mappings stipulate that the character '2' corresponds to the number 50, just as they stipulate that the character 'A' corresponds to the number 65.
Convert.ToInt32( char c ) performs a very rudimentary conversion, it essentially reinterprets the character as a number, so it allows you to see what number the character corresponds to. But if from '2' you want to get 2, that's not the conversion you want.
Instead, you want a more complex conversion, which is the following: int n = Int32.Parse( keyCode.Substring( 1, 1 ) );
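To see both behaviors side by side (a sketch; keyCode is the string from the question):
string keyCode = "1200009990000000000990";
Console.WriteLine(Convert.ToInt32(keyCode[1]));          // 50: the character code of '2'
Console.WriteLine(Int32.Parse(keyCode.Substring(1, 1))); // 2: parsed from a one-character string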
Well, you got 50 because it is the character code of '2'.
You are getting it because keyCode[1] is a char, and when C# converts a char to an int it gives back its character code. You should instead use int.Parse, which takes a string:
int.Parse(keyCode[1].ToString());
or
int.Parse(keyCode.Substring(1,1));
You need
int number = (int)Char.GetNumericValue(keyCode[1]);
The cast is needed because Char.GetNumericValue returns a double.
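For example (a sketch; note the double return type):
double value = Char.GetNumericValue('2'); // 2.0
int number = (int)value;                  // 2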
As Ofir said, another method is int number = int.Parse(keyCode[1].ToString()).
This command can be explained like this: int is an alias for Int32, which contains a Parse(string) method. Parse only works if the input is a string containing only digits, so unless the input is guaranteed to be numeric it's safer to call TryParse first (the method that checks whether a string can be parsed) and proceed only if it returns true; in this case, since your input string is always a number, you can use Parse directly. keyCode[1] uses the string's indexer, which returns the char at that position, which is why you need to invoke ToString() on it before you can parse it.
This is my personal favorite way to convert a string or char to an int, since it's pretty easy to make sense of once you understand the conversions that are performed both explicitly and implicitly. If the string to convert isn't static, it may be better to use a different method, since guarding the parse with an if statement or a try block takes longer to code and execute than some of the other solutions.
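If the input isn't guaranteed to be numeric, a TryParse-guarded version might look like this (a sketch; the variable names are illustrative):
string keyCode = "1200009990000000000990";
if (int.TryParse(keyCode[1].ToString(), out int number))
{
    Console.WriteLine(number); // 2
}
else
{
    Console.WriteLine("not a digit");
}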
I am doing:
Decimal production = 0;
Decimal expense = 5000;
Decimal.ToUInt64(production - expense);
But it throws an exception with the following error message:
"Value was either too large or too small for a UInt64"
Can someone give me a workaround for this?
Thanks!
Edit
In any case, I want the result as a positive number.
Problem: -5000m is a negative number, which is outside the range of UInt64 (an unsigned type).
Solution: use Int64 instead of UInt64 if you want to cope with negative numbers.
Note that you can just cast instead of calling Decimal.To...:
long x = (long) (production - expense);
Alternative: validate that the number is non-negative before trying to convert it, and deal with it however you deem appropriate.
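A minimal sketch of that validation (the exception choice here is just an illustration):
Decimal production = 0;
Decimal expense = 5000;
Decimal diff = production - expense;
if (diff < 0)
{
    throw new InvalidOperationException("expense exceeds production");
}
ulong result = Decimal.ToUInt64(diff);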
Very dodgy alternative: if you really just want the absolute value (which seems unlikely) you could use Math.Abs:
UInt64 alwaysNonNegative = Decimal.ToUInt64(Math.Abs(production - expense));
0 - 5000 will return -5000, and you are trying to convert to an unsigned int, which cannot take negative values.
Try changing it to a signed conversion:
Decimal.ToInt64(production - expense);
UInt64 cannot store negative numbers, and the result of your calculation is negative; that's why the error occurs. Check the sign before calling ToUInt64 and correct it by multiplying by -1, or use a signed Int64.
use
var result = Decimal.ToInt64(production - expense);
Consider this
int i = 2147483647;
var n = i + 3;
i = n;
Console.WriteLine(i); // prints -2147483646 (1)
Console.WriteLine(n); // prints -2147483646 (2)
Console.WriteLine(n.GetType()); // prints System.Int32 (3)
I am confused by the following:
(1) How could int hold the value -2147483646? (int range = -2,147,483,648 to 2,147,483,647)
(2) Why does this print -2147483646 and not 2147483650? (Shouldn't the compiler pick a better type, since the int range is exceeded?)
(3) If it is converted somewhere, why does n.GetType() give System.Int32?
Edit 1: Made the correction; now you will get what I am getting. (Sorry for that.) Changed
var n = i + 1;
to
var n = i + 3;
Edit 2: One more thing: if it is an overflow, why is an exception not raised?
Addition: since the overflow occurs, wouldn't it be right to set the type of var n in the statement var n = i + 3; to another type accordingly?
You are welcome to suggest a better title, as this is not making sense to... me, at least.
Thanks
Update: Poster fixed his question.
1) This output is expected because you added 3 to int.MaxValue, causing an overflow. In .NET this is by default a legal operation in unchecked code, giving a wrap-around to negative values, but if you add a checked block around the code it will throw an OverflowException instead (see the sketch after this list).
2) The type of a variable declared with var is determined at compile time not runtime. It's a rule that adding two Int32s gives an Int32, not a UInt32, an Int64 or something else. So even though at runtime you can see that the result is too big for an Int32, it still has to return an Int32.
3) It's not converted to another type.
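A small sketch contrasting the two contexts (unchecked is the default for non-constant expressions):
int i = int.MaxValue;
int wrapped = unchecked(i + 3);
Console.WriteLine(wrapped); // -2147483646: silent wrap-around
try
{
    int overflowed = checked(i + 3); // throws at runtime
}
catch (OverflowException)
{
    Console.WriteLine("overflow detected");
}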
1) -2147483646 is greater than -2,147,483,648, so it is within the int range
2) 2147483648 is out of range for int
3) int is an alias for System.Int32
1)
With the corrected i + 3, the value in the variable is -2147483646 (the original i + 1 wrapped to -2147483648).
There is no reason that an int could not hold the value -2147483646. It's within the range -2147483648..2147483647.
2)
The compiler chooses the data type of the variable to be the type of the result of the expression. The expression returns an int value, and even if the compiler chose a larger data type for the variable, the expression would still return an int, and you would get the same value as the result.
It's the operation in the expression that overflows, it's not when the result is assigned to the variable that it overflows.
3)
It's not converted anywhere.
This is an overflow; your number wrapped around and went negative.
This isn't the compiler's job, as a loop at runtime can cause the same thing.
int is an alias for System.Int32; they are equivalent in .NET.
This is because of the bit representation.
You use Int32 here, but the same goes for, say, sbyte (8 bits).
The first bit holds the sign, and the remaining bits hold the number (in two's complement).
So with 7 bits you can represent the 128 non-negative numbers 0 through 127 (0111 1111).
When you go one past that, 1000 0000, the sign bit gets set and the computer interprets it as -128 instead.
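The same wrap-around at 8 bits, as a sketch:
sbyte s = 127;                 // 0111 1111, the largest sbyte
s = unchecked((sbyte)(s + 1)); // narrowing cast wraps the value
Console.WriteLine(s);          // -128 (1000 0000: the sign bit is now set)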
Arithmetic operations in .NET don't change the actual type.
You start off with a (32-bit) integer, and the +3 isn't going to change that.
That's also why you get an unexpectedly truncated number when you do this:
int a = 2147483647;
double b = a / 4;
or
int a = 2147483647;
var b = a / 4;
for that matter.
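For instance (a sketch; the 4.0 literal is what makes the division floating-point):
int a = 2147483647;
double truncated = a / 4;   // 536870911: integer division, remainder dropped
double exact = a / 4.0;     // 536870911.75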
EDIT:
There is no exception because .NET wraps the number around on overflow.
An overflow exception will only occur for constant expressions (checked at compile time) or, as Mark explains, when you set the conditions to generate the exception (a checked context).
If you want an exception to be thrown, write
var n = checked(i + 3);
instead. That will check for overflows.
Also, in C#, the default setting is not to throw exceptions on overflows, but you can switch that option in your project's build properties (the "Check for arithmetic overflow/underflow" setting).
You could make this easier on us all by using hex notation.
Not everyone knows that the eighth Mersenne prime is 0x7FFFFFFF
Just sayin'
I'm a Java programmer trying to migrate to C#, and this gotcha has me slightly stumped:
int a = 1;
a = 0x08000000 | a;
a = 0x80000000 | a;
The first line compiles just fine. The second does not. It seems to recognise that there is a constant with a sign bit, and for some reason it decides to cast the result to a long, resulting in the error:
Cannot implicitly convert type 'long' to 'int'.
An explicit conversion exists (are you missing a cast?)
The fix I have so far is:
a = (int)(0x80000000 | a);
Which deals with the cast but still leaves a warning:
Bitwise-or operator used on a sign-extended operand;
consider casting to a smaller unsigned type first
What would be the correct C# way to express this in an error/warning/long-free way?
I find it interesting that in all these answers, only one person actually suggested doing what the warning says. The warning is telling you how to fix the problem; pay attention to it.
Bitwise-or operator used on a sign-extended operand; consider casting to a smaller unsigned type first
The bitwise or operator is being used on a sign-extended operand: the int. That's causing the result to be converted to a larger type: long. An unsigned type smaller than long is uint. So do what the warning says; cast the sign-extended operand -- the int -- to uint:
result = (int)(0x80000000 | (uint) operand);
Now there is no sign extension.
Of course this just raises the larger question: why are you treating a signed integer as a bitfield in the first place? This seems like a dangerous thing to do.
A numeric integer literal is an int by default, unless the number is too large to fit in an int, in which case it becomes a uint instead (and so on for long and ulong).
As the value 0x80000000 is too large to fit in an int, it's a uint value. When you use the | operator on an int and a uint, both are extended to long, as neither can be safely converted to the other.
The value can be represented as an int, but then you have to ignore that it becomes a negative value. The compiler won't do that silently, so you have to instruct it to make the value an int without caring about the overflow:
a = unchecked((int)0x80000000) | a;
(Note: This only instructs the compiler how to convert the value, so there is no code created for doing the conversion to int.)
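To see the literal-typing rule in action (a sketch):
var a = 0x7FFFFFFF;             // fits in int
var b = 0x80000000;             // too large for int, so uint
Console.WriteLine(a.GetType()); // System.Int32
Console.WriteLine(b.GetType()); // System.UInt32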
Your issue is that 0x80000000 is too large to fit in an int, so the literal is a uint, and or-ing an int with a uint widens both operands to long.
It should work fine if you use a uint throughout.
a = ((uint)0x80000000) | a; //assuming a is a uint
Changing that line to
a = (int)((uint)0x80000000 | (uint)a);
does it for me.
The problem you have here is that:
- 0x80000000 is an unsigned integer literal. The specification says that an integer literal is of the first type in the list (int, uint, long, ulong) which can hold the literal. In this case it is uint.
- a is probably an int.
This causes the result to be a long. I don't see a nicer way other than casting the result back to int, unless you know that a can't be negative. Then you can cast a to uint, or declare it that way in the first place.
You can make it less ugly by creating a constant that starts with the letter 'O'. (And unlike on this website, in Visual Studio it doesn't show up in a different color.)
const int Ox80000000 = unchecked((int)0x80000000);
/// ...
const int every /* */ = Ox80000000;
const int thing /* */ = 0x40000000;
const int lines /* */ = 0x20000000;
const int up /* */ = 0x10000000;
bitNot = (sbyte)(~bitNot)
VS.
myInt = Int32.Parse(myInput);
Hello, I'm a bit confused by the above two statements... it seems like both statements are trying to convert, but why is the syntax bitNot = (sbyte)(~bitNot) for the first statement?
Why can't we use bitNot = sbyte.Parse(~bitNot), like the syntax we have in the second statement? Thanks
The first statement takes bitNot, which is presumably some form of integer, inverts all the bits, casts the result to an sbyte, and stores it back in bitNot.
The second statement takes myInput, which is most likely a string of some sort, parses it from a human-readable form into an Int32 type, and stores that in myInt.
The major difference is the types you are operating on; you only need Parse if you are dealing with strings. In the first statement, a cast operation is being done instead; this usually means something like narrowing a 32-bit integer to an 8-bit integer. It is a very different kind of operation.
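A minimal sketch contrasting the two (the values are illustrative):
sbyte bitNot = 5;                 // 0000 0101
bitNot = (sbyte)(~bitNot);        // bitwise NOT, then a narrowing cast
Console.WriteLine(bitNot);        // -6

string myInput = "42";
int myInt = Int32.Parse(myInput); // text to number
Console.WriteLine(myInt);         // 42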