Is there a way to emulate special C datatypes, like uint64 (_int64), anyID (_int16) in C#?
I defined the special datatypes in C like this:
typedef unsigned _int16 anyID;
typedef unsigned _int64 uint64;
It's for using the TS3 Plugin API.
It has to be C#, though, and I just want to use the C datatypes defined by TS3 in C#.
The equivalent of a typedef is using:
using anyID = System.UInt16;
using uint64 = System.UInt64;
The sizes of the different numeric types in C# can be found here: Integral Types Table.
One thing to note: the sizes of the different numeric types are fixed in C#, unlike in C where they are platform-dependent, so it's usually redundant to define aliases for numeric type sizes like int64.
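For illustration, here is a minimal sketch of how those aliases might look in a plugin source file. The ClientInfo type and its field names are my own invention, not part of the TS3 API:

// Note: 'using' aliases are per-file, so they have to be repeated in every
// file that refers to them.
using anyID = System.UInt16;
using uint64 = System.UInt64;

class ClientInfo
{
    // Hypothetical fields mirroring the C typedefs.
    public anyID ClientId;
    public uint64 ServerConnectionHandlerId;
}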
Unsigned integer types are already predefined; see MS C# types.
In short: ushort is an unsigned 16-bit int, and ulong is an unsigned 64-bit int.
For unsigned integers you have ushort, uint and ulong, which are the equivalents of unsigned int16, unsigned int32 and unsigned int64 respectively.
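To double-check that the keywords and the framework type names really refer to the same types, a quick sketch like this can be used:

using System;

class TypeCheck
{
    static void Main()
    {
        Console.WriteLine(typeof(ushort) == typeof(UInt16)); // True
        Console.WriteLine(typeof(ulong) == typeof(UInt64));  // True
        Console.WriteLine(sizeof(ushort));                   // 2 (bytes)
        Console.WriteLine(sizeof(ulong));                    // 8 (bytes)
    }
}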
Related
How do I pass a UTF-16 char from a C++/CLI function to a .NET function? What types do I use on the C++/CLI side and how do I convert it?
I've currently defined the C++/CLI function as follows:
wchar_t GetCurrentTrackID(); // 'wchar_t' is the C++ unicode char equivalent to .NET's 'char'?
The .NET wrapper is defined as:
System::Char GetCurrentTrackID(); // here, 'char' means UTF-16 char
I'm currently using this to convert it, but when testing it I only get a null character. How do I properly convert a Unicode char code to its char equivalent for .NET?
#pragma managed
return (System::Char)player->GetCurrentTrackID();
They are directly compatible. You can assign a Char to a wchar_t and the other way around without a cast, the compiler will not emit any kind of conversion function call. This is true for many simple value types in C++/CLI, like Boolean vs bool, SByte vs char, Byte vs unsigned char, Int16 vs short, Int32 vs int or long, Int64 vs long long, Single vs float, Double vs double. Plus their unsigned varieties. The compiler will treat them as aliases since they have the exact same binary representation.
But not strings or arrays, they are classes with a non-trivial implementation that doesn't match their native versions at all.
Following is the C# code:
static void Main(string[] args)
{
    uint y = 12;
    int x = -2;
    if (x > y)
        Console.WriteLine("x is greater");
    else
        Console.WriteLine("y is greater");
}
and this is the C++ code:
int _tmain(int argc, _TCHAR* argv[])
{
    unsigned int y = 12;
    int x = -2;
    if (x > y)
        printf("x is greater");
    else
        printf("y is greater");
    return 0;
}
Both give different results. Am I missing something basic? Any ideas?
C++ and C# are different languages. They have different rules for handling type promotion in the event of comparisons.
In C and C++, they're usually compared as if they were both unsigned. This is called "unsigned preserving"; it is what C and C++ compilers have traditionally done, and it is specified in the C++ standard and in K&R.
In C#, they're both converted to signed longs and then compared. This is called "value preserving". C# specifies value preserving.
ANSI C also specifies value preserving, but only when dealing with shorts and chars. Shorts and chars (signed and unsigned) are upconverted to ints in a value-preserving manner and then compared. So if an unsigned short were compared to a signed short, the result would come out like the C# example. Any time a conversion to a larger size is done, it's done in a value-preserving manner, but if the two variables are the same size (and not shorts or chars) and either one is unsigned, then they get compared as unsigned quantities in ANSI C. There's a good discussion of the up and down sides of both approaches in the comp.lang.c FAQ.
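A short C# sketch (my own illustration, not from the original posts) of the value-preserving comparison described above:

using System;

class Promotion
{
    static void Main()
    {
        uint y = 12;
        int x = -2;

        // Neither int nor uint can represent every value of the other,
        // so C# converts both operands to long before comparing.
        Console.WriteLine(x > y);             // False
        Console.WriteLine((long)x > (long)y); // False (the equivalent explicit form)
    }
}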
In C++, when you compare an unsigned int and a signed int, the signed int is converted to unsigned int. Converting a negative signed int to an unsigned int is done by adding UINT_MAX + 1, which is larger than 12 and hence the result.
In C#, if you are getting the opposite result, it means that both expressions are being converted to signed long (long or System.Int64)¹ and then compared.
In C++, your compiler must have given you the warning:
warning: comparison between signed and unsigned integer expressions
Rule:
Always take warnings emitted by the compiler seriously!
¹ As rightly pointed out by svick in the comments.
I don't know about the C# standard, but in the C++ standard, the usual arithmetic conversions are applied to both operands of relational operators:
[... enum, floating-point type involved ...]
— Otherwise, the integral promotions (4.5) shall be performed on both operands. Then the following rules shall be applied to the promoted operands:
— If both operands have the same type, no further conversion is needed.
— Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank shall be converted to the type of the operand with greater rank.
— Otherwise, if the operand that has unsigned integer type has rank greater than or equal to the rank of the type of the other operand, the operand with signed integer type shall be converted to the type of the operand with unsigned integer type.
— Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type shall be converted to the type of the operand with signed integer type.
— Otherwise, both operands shall be converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
Thus, when an unsigned int is compared with an int, the int is converted to unsigned int, and -2 becomes a very large number after that conversion.
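Expressed in C# for comparison (a sketch, not part of the quoted answer), the conversion the C++ compiler performs looks like this:

using System;

class Wraparound
{
    static void Main()
    {
        // This is effectively the left-hand value the C++ comparison sees:
        uint converted = unchecked((uint)-2); // UINT_MAX + 1 - 2
        Console.WriteLine(converted);         // 4294967294
        Console.WriteLine(converted > 12);    // True, matching the C++ output
    }
}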
Given the code below:
static void Main()
{
    Console.WriteLine(typeof(MyEnum).BaseType.FullName);
}

enum MyEnum : ushort
{
    One = 1,
    Two = 2
}
It outputs System.Enum, which means the colon here has nothing to do with inheritance and just specifies the basic type of the enum. Am I right?
But if I change my code as follows:
enum MyEnum : UInt16
{
    One = 1,
    Two = 2
}
I would get a compilation error. Why? Aren't UInt16 and ushort the same?
You are correct that reflection doesn't report that an enum inherits the base type, which the specification calls the "underlying type". You can find it using Enum.GetUnderlyingType instead.
The type named by ushort and the type named by System.UInt16 are precisely the same.
However, the syntax of enum does not call for a type. Instead it calls for one of a limited set of keywords, which control the underlying type. While System.UInt16 is a valid underlying type, it is not one of the keywords which the C# grammar permits to appear in that location.
Quoting the grammar:
enum-declaration:
    attributes(opt) enum-modifiers(opt) enum identifier enum-base(opt) enum-body ;(opt)

enum-base:
    : integral-type

integral-type:
    sbyte
    byte
    short
    ushort
    int
    uint
    long
    ulong
    char
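A small sketch tying the two points above together: the declaration must use the ushort keyword, while Enum.GetUnderlyingType reports the underlying type at runtime:

using System;

enum MyEnum : ushort // the grammar requires the keyword form here
{
    One = 1,
    Two = 2
}

class Program
{
    static void Main()
    {
        // BaseType reports the CLR inheritance chain...
        Console.WriteLine(typeof(MyEnum).BaseType.FullName);       // System.Enum
        // ...while the underlying type is exposed separately:
        Console.WriteLine(Enum.GetUnderlyingType(typeof(MyEnum))); // System.UInt16
    }
}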
Because the valid types for an enum are explicitly specified to be the integral types (except char).
The approved types for an enum are byte, sbyte, short, ushort, int, uint, long, or ulong.
http://msdn.microsoft.com/en-us/library/sbbt4032.aspx
One would expect UInt16 to be equivalent to ushort, given the documentation for built-in types:
The C# type keywords and their aliases are interchangeable. For example, you can declare an integer variable by using either of the following declarations...
http://msdn.microsoft.com/en-us/library/ya5y69ds.aspx
Edit: I had messed around with this answer a few times without quite grasping the correct answer. @BenVoight is correct. The accepted list is the integral types (other than char). System.UInt16 is exactly the same type as ushort, but it is not an integral-type keyword (merely a struct type name) as specified by the grammar.
That's compiler error CS1008, and it pretty much provides the answer. The approved types for an enum:
The approved types for an enum are byte, sbyte, short, ushort, int,
uint, long, or ulong.
The first part of your question is answered by others, but no one has addressed the second part yet. (Someone other than the OP has since edited the second question, so my answer may no longer apply.)
UInt16 and UInt are not the same: UInt16 is an unsigned 16-bit integer, UInt is an unsigned 32-bit integer. They vary quite a bit in their maximum values.
Just for completeness, I'm including the answer to the first question also:
The approved types for an enum are byte, sbyte, short, ushort, int, uint, long, or ulong.
As for why?
My guess is CLS compliance.
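For reference, a quick sketch comparing the two maximum values mentioned above:

using System;

class Ranges
{
    static void Main()
    {
        Console.WriteLine(ushort.MaxValue); // 65535 (UInt16)
        Console.WriteLine(uint.MaxValue);   // 4294967295 (UInt32)
    }
}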
The question is easy! How do you represent a 64-bit int in C#?
A 64-bit int is long.
System.Int64 is the .NET type; in C# it's also called long.
A signed 64-bit integer is long; an unsigned one is ulong.
The corresponding types in the framework are System.Int64 and System.UInt64, respectively.
Example:
long bigNumber = 9223372036854775807;
By using the long data type, or ulong for unsigned.
Table of Integral C# Types
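A quick sketch showing both 64-bit variants and their ranges:

using System;

class SixtyFourBit
{
    static void Main()
    {
        Console.WriteLine(long.MinValue);  // -9223372036854775808
        Console.WriteLine(long.MaxValue);  //  9223372036854775807
        Console.WriteLine(ulong.MaxValue); //  18446744073709551615
    }
}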
I know that the SQL equivalent of Int16 is SqlInt16.
But what is the SQL equivalent of UInt16, UInt32 and UInt64?
Except for tinyint, there are no native unsigned types in SQL Server, so there is no good equivalent. The best you can do is use a higher-precision number and add a constraint on the permissible values.