I need to perform a bitwise '&' operation on a value of type uint.
enum MsgType : ulong
{
Begin = 0x00000001,
}
uint number = 0x00000002;
if (number & MsgType.Begin == MsgType.Begin)
//Not working
It is giving an error:
Operator '&' cannot be applied to operands of type 'uint' and 'bool'
How to cast it?
It seems to be an issue of operator precedence in C#: == binds more tightly than &, so the condition is parsed as number & (MsgType.Begin == MsgType.Begin), and the compiler then tries to apply & to a uint and a bool, as per this related question. Use parentheses to force the grouping you intend,
i.e. instead of this:
if (number & MsgType.Begin == MsgType.Begin)
do this:
if ((number & (uint)MsgType.Begin) == (uint)MsgType.Begin)
Note that once the precedence is fixed you still need the casts, because number is a uint while MsgType.Begin is an enum whose underlying type is ulong, so & cannot be applied to that pair of operands directly either.
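Putting that together, here is a minimal self-contained sketch of the check (the Program class and Main method are only scaffolding added here so the snippet compiles on its own):
using System;
enum MsgType : ulong
{
    Begin = 0x00000001,
}
static class Program
{
    static void Main()
    {
        uint number = 0x00000002;
        // Parentheses make & run before ==, and the casts line the types up
        // (number is uint, while the enum's underlying type is ulong).
        if ((number & (uint)MsgType.Begin) == (uint)MsgType.Begin)
            Console.WriteLine("Begin bit is set");
        else
            Console.WriteLine("Begin bit is not set");
    }
}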
Related
I am trying to use string and byte in the same condition but getting an error
operator & cannot be applied to operands of type bool and byte
string m = "";
string mi = "";
byte Active = 0;
if ((m == mi) & Active)
{
}
The binary operator & can be applied to two boolean operands (it is the non-short-circuiting logical AND).
In your case the left operand, m == mi, is a boolean, so the right operand also needs to be an expression that evaluates to a boolean:
if ((m == mi) & expression_that_returns_boolean)
{
}
byte (an alias for System.Byte) is a keyword used to declare a variable that can store an unsigned value in the range 0 to 255.
Thus your if statement effectively becomes:
if (boolean & byte)
which is exactly the error you are seeing:
Operator '&' cannot be applied to operands of type 'bool' and 'byte'
Therefore the right operand of your & operator should be an expression whose result is a boolean.
In your case you should write an expression that, applied to your byte, returns a boolean:
byte Active = 0;
if ((m == mi) & expression(Active))
The expression could be:
Active == [some_integer]
Resulting in:
if ((m == mi) & (Active == 0))
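As a complete, compilable sketch (taking 0 as the Active value you want to match, as in the original snippet):
using System;
static class Program
{
    static void Main()
    {
        string m = "";
        string mi = "";
        byte Active = 0;
        // Both operands of & are now booleans; the extra parentheses just make
        // the grouping explicit, since == already binds more tightly than &.
        if ((m == mi) & (Active == 0))
        {
            Console.WriteLine("Strings are equal and Active is 0");
        }
    }
}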
I wrote the following code that uses a bitField variable as a map for characters to check their presence in a given string:
public static bool AllElementsUniqueInASCIIString(string str) {
    int bitField = 0;
    foreach (int character in str) {
        int idx = character - 'a';
        int mask = 1 << idx;
        if (bitField | mask == 1)
            return false;
        else
            bitField &= mask;
    }
    return true;
}
The problem is, on the expression if (bitField | mask == 1), the following error is thrown at compile time:
CS0019: Operator '|' cannot be applied to operands of type 'int' and 'bool'
However, when I replace this line with the following snippet, the code compiles just fine:
int present = bitField | mask;
if (present == 1)
Both bitField and mask are integers, so what is the cause of this compile error? What syntax should I use to check this condition in one line?
You will need parentheses to express your intention explicitly:
if ((bitField | mask) == 1)
The compiler thinks you want the following (due to operator precedence):
if (bitField | (mask == 1))
i.e. int | bool
which is what the error is telling you:
CS0019: Operator '|' cannot be applied to operands of type 'int' and 'bool'
In short, the equality operators have higher precedence than the Boolean logical OR and bitwise OR operators.
Additional Resources
Operator precedence
In an expression with multiple operators, the operators with higher precedence are evaluated before the operators with lower precedence.
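As an aside beyond the compile error itself, the bit-field logic presumably intends to test a bit with & and record it with |=; a corrected sketch of the method under that reading (still assuming lowercase ASCII input, as the - 'a' offset implies) would be:
public static bool AllElementsUniqueInASCIIString(string str) {
    int bitField = 0;
    foreach (int character in str) {
        int idx = character - 'a';
        int mask = 1 << idx;
        // Parentheses keep this from parsing as bitField & (mask != 0);
        // a nonzero result means the character was already seen.
        if ((bitField & mask) != 0)
            return false;
        bitField |= mask;   // record this character in the bit field
    }
    return true;
}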
Given the following C++ Code:
void Check(DWORD64 ptr)
{
if ( ! (ptr & 0x8000000000000000) )
return;
}
In C# this results in the following Error:
CS0023 Operator '!' cannot be applied to operand of type 'ulong'
How do I bitwise check the ptr parameter in C#?
Comparing to not 0?
void Check(ulong ptr)
{
if ((ptr & 0x8000000000000000) != 0)
return;
}
or checking for 0?
void Check(ulong ptr)
{
if ((ptr & 0x8000000000000000) == 0)
return;
}
Googling for this question leads to all sorts of answers on different bitwise operations, but I couldn't find an answer for this specific negation with the exclamation mark.
When the ! operator ("logical not") is applied to a numeric value in C or C++, it produces a result as follows:
1 if its operand is zero
0 otherwise
C's if statement interprets 0 as "false" and any non-zero value, including 1, as "true". Therefore, your second option is correct.
The second one. C++ treats any nonzero value as true, so the ! check on an integer is effectively equivalent to !(value != 0), i.e. (value == 0).
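So a direct C# translation of the C++ check might look like the following (HighBit is just an illustrative name for the mask, introduced here for readability):
static void Check(ulong ptr)
{
    const ulong HighBit = 0x8000000000000000UL; // same mask as in the C++ code
    // C++'s !(ptr & mask) is true exactly when the masked value is 0,
    // so the equivalent C# test compares against 0 explicitly.
    if ((ptr & HighBit) == 0)
        return;
    // ... rest of the original Check body ...
}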
Here is part of my code
ushort code = ...;
...
code <<= 1;
code |= (NextBit(ref isEndOfScan) << 0); //ERROR
where NextBit is declared as bool NextBit(ref bool isEndOfScan), i.e. it returns a bool.
I am rewriting my code from C++ to C#.
I've tried casting the function result to int and writing false instead of 0, but nothing helped.
I want to set bit 0 of the variable code.
C++ allows some conversions that C# doesn't - particularly around Boolean values.
In this case, you can use the conditional operator to turn the bool into 1 or 0 (the cast is needed because the conditional expression has type int, which the ushort compound assignment will not accept implicitly):
code |= (ushort)(NextBit(ref isEndOfScan) ? 1 : 0);
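A small sketch showing both lines in context (AppendBit and the NextBit stub below are purely illustrative placeholders, not part of the original code):
static ushort AppendBit(ushort code, ref bool isEndOfScan)
{
    code <<= 1;                                          // make room for the new bit
    code |= (ushort)(NextBit(ref isEndOfScan) ? 1 : 0);  // set bit 0 from the bool
    return code;
}
// Stub so the sketch is complete; the real NextBit reads from the scan data.
static bool NextBit(ref bool isEndOfScan)
{
    isEndOfScan = false;
    return true;
}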
Using Casting null doesn't compile as inspiration, and from Eric Lippert's comment:
That demonstrates an interesting case. "uint x = (int)0;" would succeed even though int is not implicitly convertible to uint.
We know this doesn't work, because object can't be assigned to string:
string x = (object)null;
But this does, although intuitively it shouldn't:
uint x = (int)0;
Why does the compiler allow this case, when int isn't implicitly convertible to uint?
Integer constant conversions are treated as very special by the C# language; here's section 6.1.9 of the specification:
A constant expression of type int can be converted to type sbyte, byte, short, ushort, uint, or ulong, provided the value of the constant-expression is within the range of the destination type. A constant expression of type long can be converted to type ulong, provided the value of the constant expression is not negative.
This permits you to do things like:
byte x = 64;
which would otherwise require an ugly explicit conversion:
byte x = (byte)64; // gross
The following code will fail with the message "Cannot implicitly convert type 'int' to 'uint'. An explicit conversion exists (are you missing a cast?)"
int y = 0;
uint x = (int)y;
And this will fail with: "Constant value '-1' cannot be converted to a 'uint'"
uint x = (int)-1;
So the only reason uint x = (int)0; works is that the compiler sees that 0 (or any other non-negative constant that fits in a uint) is a compile-time constant that can be converted to a uint.
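To summarize which of these forms compile under the constant expression rule quoted above:
uint a = 0;          // compiles: 0 is a constant within uint's range
uint b = (int)0;     // compiles: (int)0 is still a constant expression with value 0
byte c = 64;         // compiles: the constant 64 fits in a byte
// uint d = (int)-1; // error: the constant -1 cannot be converted to uint
int y = 0;
// uint e = (int)y;  // error: y is not a constant, so an explicit cast to uint is required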
In general, compilers convert code in a few stages: the text is tokenized > the tokens are parsed > an AST is built and bound > the AST is converted to the target output. The evaluation of constant expressions such as (int)0 happens during that binding stage, not at tokenization, so by the time the conversion to uint is checked the compiler is looking at the constant value 0, which falls under the implicit constant expression conversion quoted above.