Char to ASCII int conversion in Go Language - C#

We have a project migration happening from C# .NET to Go. I have completed most of it, but I am stuck at one place. In C#, I have this code:
(int)char < 31
How can I write this in Go?

There is no "char" type in Go, the closest you can get is rune which is an alias for int32.
Being an alias of int32 means the types int32 and rune are identical and you can treat a rune like an int32 number (so you can compare it, add to it / subtract from it etc.).
But know that Go is strict about types, and you can't compare values of different types (in your answer you are comparing it with an untyped integer constant which is ok). For example the following code is a compile-time error:
var r rune = 'a'
var i int = 100
if r < i { // Compile-time error: invalid operation: r < i (mismatched types rune and int)
    fmt.Println("less")
}
Should you need to convert a value of rune or any other integer type to another integer type (e.g. rune to int), you can use a simple type conversion, e.g.:
var r rune = 'a'
var i int = 100
if int(r) < i {
    fmt.Println("less")
}
See related question: Equivalent of python's ord(), chr() in go?

I found the answer myself with the change below:
var r rune
r = 'a'      // char
ok := r < 31 // a rune compares directly with an untyped integer constant
This worked for me.

Related

Compare two integer objects for equality regardless of type

I'm wondering how you could compare two boxed integers (either can be signed or unsigned) to each other for equality.
For instance, take a look at this scenario:
// case #1
object int1 = (int)50505;
object int2 = (int)50505;
bool success12 = int1.Equals(int2); // this is true. (pass)
// case #2
int int3 = (int)50505;
ushort int4 = (ushort)50505;
bool success34 = int3.Equals(int4); // this is also true. (pass)
// case #3
object int5 = (int)50505;
object int6 = (ushort)50505;
bool success56 = int5.Equals(int6); // this is false. (fail)
I'm stumped on how to reliably compare boxed integer types this way. I won't know what they are until runtime, and I can't just cast them both to long, because one could be a ulong. I also can't just convert them both to ulong because one could be negative.
The best idea I could come up with is to just trial-and-error-cast until I find a common type or can rule out equality, which isn't an ideal solution.
In case 2, you actually end up calling int.Equals(int), because ushort is implicitly convertible to int. This overload resolution is performed at compile time. It's not available in case 3 because the compiler only knows the types of int5 and int6 as object, so it calls object.Equals(object)... and it's natural that object.Equals will return false if the types of the two objects are different.
You could use dynamic typing to perform the same sort of overload resolution at execution time - but you'd still have a problem if you tried something like:
dynamic x = 10;
dynamic y = (long) 10;
Console.WriteLine(x.Equals(y)); // False
Here there's no overload that will handle long, so it will call the normal object.Equals.
One option is to convert the values to decimal:
object x = (int) 10;
object y = (long) 10;
decimal xd = Convert.ToDecimal(x);
decimal yd = Convert.ToDecimal(y);
Console.WriteLine(xd == yd);
This will handle comparing ulong with long as well.
I've chosen decimal as it can exactly represent every value of every primitive integer type.
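Putting that together, a helper along these lines would work. This is a minimal sketch; the name BoxedIntegersEqual is hypothetical, and it assumes both arguments are boxed primitive numeric values (Convert.ToDecimal throws for anything else):
using System;

static class BoxedComparer
{
    // Compare two boxed integers by converting both to decimal, which can
    // exactly represent every value of every primitive integer type.
    public static bool BoxedIntegersEqual(object a, object b)
    {
        return Convert.ToDecimal(a) == Convert.ToDecimal(b);
    }
}
For example, BoxedIntegersEqual((int)50505, (ushort)50505) returns true, while BoxedIntegersEqual((long)-1, ulong.MaxValue) returns false.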
Integer is a value type. When you compare two integer values, the compiler compares their values.
Object is a reference type. When all the compiler knows is object, it can only emit a call to the virtual object.Equals(object).
The interesting part is here:
object int5 = (int)50505;
The compiler performs a boxing operation, wrapping the value type in a reference type. The virtual call still reaches Int32.Equals(object), but that override returns false as soon as the argument's runtime type is not int, which is why case #3 fails even though the values match.
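A small sketch makes the distinction visible (the output comments reflect standard .NET behavior):
using System;

object a = 50505;            // boxed int
object b = 50505;            // a different box holding the same value
Console.WriteLine(ReferenceEquals(a, b)); // False - two distinct boxes
Console.WriteLine(a.Equals(b));           // True  - Int32.Equals compares the values

object c = (ushort)50505;    // boxed ushort
Console.WriteLine(a.Equals(c));           // False - runtime types differ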

Why does unboxing an Object to long throw an InvalidCastException?

I want to know why the following throws an InvalidCastException:
Object obj = 9;
long num = (long)obj; //InvalidCastException
After searching on the net, I found out that the object holds 9 as an int, so long doesn't exactly match int.
My question is: why does the object hold 9 as an int, and not as a short or a long?
Because 9 is an Int32 literal. To specify an Int64 literal, use:
Object obj = 9L;
long num = (long)obj;
You can actually make this work if you explicitly say that it's a long. Plain integer literals are read as int unless there is a decimal point or a suffix.
Object obj = 9L;
long num = (long)obj;
The following will also result in an invalid cast exception:
Object obj = 9L;
int num = (int)obj; //InvalidCastException
int is the default data type for non-decimal numeric literals, just as double is the default for decimal numeric literals. You can force numeric literals to other types with the appropriate suffixes. You can use suffixes for int and double too, but pretty much no one ever does.
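If you can't control how the value was boxed, a common workaround is to unbox to the exact type first and then widen, or to let Convert do the work at runtime. A sketch, assuming the box really does hold an int:
using System;

object obj = 9;                  // boxed as int (System.Int32)
// long bad = (long)obj;         // InvalidCastException: unboxing must name the exact type
long ok1 = (long)(int)obj;       // unbox to int, then an ordinary widening conversion
long ok2 = Convert.ToInt64(obj); // or convert via IConvertible at runtime
Console.WriteLine(ok1 == ok2);   // True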

C# - var not behaving as expected in for loop

I found a situation where using "var" caused unexpected results.
In the code below I was expecting X to be declared as datatype "long".
Why does X get declared as datatype "int"? (This causes an infinite loop in this case.)
long maxNumber = (long)int.MaxValue + 1;
long count = 0;
for (var X = 0; X < maxNumber; X++)
{
    count++;
}
And why did you expect
var X = 0;
to infer datatype long?
Type inference for var variables looks only at the type of the initial value. It doesn't consider usage.
Others are telling you how to control the type of 0, with a suffix. I say, if you want a particular type, go ahead and write
long X = 0;
This isn't really the sweet spot for var. Type inference is mainly for types which are hard to name (IEnumerable<KeyValuePair<string, Converter<TreeViewNode, IEnumerable<TreeViewNode>>>> anyone?) or can't be named at all, in the case of anonymous types, or if you want the type to automatically change to match the return type of some other function. Integral loop counters just don't benefit.
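For instance, here is a case (a small illustrative sketch) where var is the only option, because an anonymous type has no name you could write out:
using System;

var point = new { X = 1, Y = 2 };     // anonymous type; no explicit type name exists
Console.WriteLine(point.X + point.Y); // 3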
var X = 0;
This is the line that declares X's type, regardless of how it's later used. When you specify a numeric literal without any suffix, it will be an int. Here's one possible solution:
var X = 0L;
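Applied to the original loop, the L suffix makes X a long, so it can pass int.MaxValue and the loop terminates (a sketch of the fix):
long maxNumber = (long)int.MaxValue + 1;
long count = 0;
for (var X = 0L; X < maxNumber; X++) // X is now inferred as long
{
    count++;
}
// count ends up as 2147483648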

Difference between char in Java and C# (particular problem with (char)-1)

char x = (char)-1;
is valid in Java, but in C# it shows me the error (Overflow in constant value computation).
Should I use a different datatype in C#?
The error occurs because C# checks constant expressions for overflow at compile time (hence "constant value computation"). In C# one could do...
int x = -1;
char c = (char)x;
int y = c;
// y is 0xffff, as per Java
However, do note that 0xFFFF is an invalid Unicode character :)
Happy coding.
Using unchecked will also "work":
unchecked {
    char x = (char)-1;
}
Here's the definition of the C# char type; it's a 16-bit Unicode character, same as in Java. If you are just looking for a 16-bit signed value, then you might want a short.
For your reference, here is a list of the integral types available to you in C#.
You can also use an unchecked statement such as unchecked { char x = (char)-1; }. However, if it were me and I was, for instance, using -1 to represent an error value or some other marker, I would probably just use char x = (char)0xFFFF;, which gives you the same result, and a way of checking for an invalid value, without needing to circumvent the compile-time overflow check.
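Both routes produce the same bit pattern; a quick sketch (output comments reflect standard C# behavior):
using System;

char x = unchecked((char)-1); // the expression form of unchecked
char y = (char)0xFFFF;        // same value, no unchecked needed
Console.WriteLine(x == y);    // True
Console.WriteLine((int)x);    // 65535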

If compiler can perform implicit narrowing conversion on an integer literal, then it should also

If the compiler is able to implicitly convert an integer literal to byte and assign the result to b (b = 100;), why can't it also implicitly assign the result of the expression a + 100 (which is of type int) to b?
byte a = 10;
byte b = a; //ok
b = 100; //ok
b = a + 100; //error - explicit cast needed
b = (byte)(a + 100); // ok
Thanks.
It's all about static type safety - whether, at compile time, we can safely know the type of an expression. With a literal, the compiler can tell whether it can be converted to a byte. In byte a = 20, 20 is convertible, so it all goes through fine. byte a = 257 won't work (257 can't be converted).
In the case of byte b = a, we already know a is a byte, so type safety is assured. b = 100 is again fine (it's statically known that 100 is convertible).
In b = a + 100, it is not statically known whether a + 100 is a byte. a could contain 200, so a + 100 would not be representable as a byte. Hence the compiler forces you to tell it "yes, a + 100 is always a byte" via a cast, appealing to your higher-level programmer knowledge.
Some more advanced type systems don't suffer from this problem, but they come with their own problems that most programmers won't like.
The compiler allows you to implicitly convert an integer literal into a byte, since it can, at compile time, check the value of the literal to make sure that it's a byte, and treat it as a byte literal.
You can see this if you try the following:
byte a = 10; // Works, since 10 is valid as byte
byte b = 239832; // Gives error!
The error you get if you put an arbitrary int is:
Error 1 Constant value '239832' cannot be converted to a 'byte'
When you're adding a literal to a byte:
b = a + 100
There's the potential for overflow, so it's not implicitly allowed. You need to tell the compiler that you explicitly want this to happen, via a cast.
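To see why the compiler is cautious, consider this sketch (the wrap-around value assumes the default unchecked context):
using System;

byte a = 200;
// byte b = a + 100;      // error CS0266: cannot implicitly convert type 'int' to 'byte'
byte b = (byte)(a + 100); // compiles; 300 wraps around to 44 (300 - 256)
Console.WriteLine(b);     // 44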
If you use the assigning version of the operator (+=), it will perform the narrowing conversion on the result without reporting an error, because a compound assignment like b += 100 is defined as b = (byte)(b + 100), with the cast inserted implicitly:
byte a = 10;
byte b = a; //ok
b = 100; //ok
b = a;
b += 100; //ok
Because treating literals specially is easy and useful; having the compiler distinguish all expressions consisting of compile-time constants and treat them specially would be far more work and far less useful.
