Using int (32-bit) vs cast to ushort - C#

I have an array of bytes and a struct. I want to set some of the struct's members from some of the array's members, like this:
mystruct.m1 = (myarr[0] << 8) | myarr[1]; // join bytes
So mystruct.m1 is going to be a 16-bit integer (ushort). However, if I declare mystruct.m1 as ushort, I have to cast (myarr[0] << 8) | myarr[1] to ushort, or Visual Studio says "cannot implicitly convert type 'int' to 'ushort'". But if I use a plain int, no cast is needed.
Which one is better and more efficient?

A byte in C# is unsigned (values 0 to 255). Therefore I would go with ushort as well.
Which one is better?
The one that provides better semantics to future readers. With a ushort, it's immediately clear that only two bytes fit in it. If you use an int instead, one might wonder what the other 16 bits are good for.
Also, with int, right-shift operations behave differently if the value happens to be negative for whatever reason (the shift is arithmetic, so the sign bit is propagated).
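A minimal sketch of both options (assuming myarr is a byte[] with at least two elements):
byte[] myarr = { 0x12, 0x34 };
ushort asUShort = (ushort)((myarr[0] << 8) | myarr[1]); // cast required: the shift/OR is computed as int
int asInt = (myarr[0] << 8) | myarr[1]; // no cast needed, but the 16-bit intent is hidden
Both hold 0x1234; only the declared type differs.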
Which one is more efficient?
Why care about efficiency if you have no performance problem?

Related

Binary Shift Differences between VB.NET and C#

I just found an interesting problem while translating some data:
VB.NET: CByte(4) << 8 returns 4
But C#: (byte)4 << 8 returns 1024
Namely, why does VB.NET: (CByte(4) << 8).GetType() return type {Name = "Byte" FullName = "System.Byte"}
Yet C#: ((byte)4 << 8).GetType() returns type {Name = "Int32" FullName = "System.Int32"}
Is there a reason why these two treat the binary shift differently? Following from that, is there any way to make the C# bit shift perform the same as VB.NET (to make VB.NET perform like C# you just do CInt(_____) << 8)?
According to http://msdn.microsoft.com/en-us/library/a1sway8w.aspx, byte does not have << defined on it in C# (only int, uint, long and ulong do). This means the compiler uses an implicit conversion to a type it can use, so it converts the byte to int before doing the bit shift.
http://msdn.microsoft.com/en-us/library/7haw1dex.aspx says that VB defines the operation on Byte. To prevent overflow, it applies a mask to your shift count to bring it within an appropriate range, so in this case it is actually shifting by nothing at all (8 And 7 = 0).
As to why C# doesn't define shifting on bytes I can't tell you.
To actually make it behave the same for other data types, you just need to mask your shift count with 7 for bytes or 15 for shorts (see the second link for details).
To apply the same in C#, you would use
static byte LeftShiftVBStyle(byte value, int count)
{
    return (byte)(value << (count & 7));
}
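For example, LeftShiftVBStyle(4, 8) returns 4, matching the VB.NET result: the count is masked to 8 & 7 = 0, so no shifting occurs.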
As for why VB took that approach... it's just a different language with different rules (to be fair, it is a natural extension of the way C# itself masks shift counts: & 31 for int and & 63 for long).
Chris already nailed it: VB.NET has defined shift operators for the Byte and Short types, C# does not. The C# spec is very similar to C and is also a good match for the MSIL definitions of OpCodes.Shl, Shr and Shr_Un; they only accept int32, int64 and intptr operands. Accordingly, any byte- or short-sized operands are first converted to int32 with their implicit conversion.
That's a limitation the VB.NET compiler has to work with: it needs to generate extra code to make the byte- and short-specific versions of the operators work. The byte operator is implemented like this:
Dim result As Byte = CByte(leftOperand << (rightOperand And 7))
and the short operator:
Dim result As Short = CShort(leftOperand << (rightOperand And 15))
The corresponding C# operation, expressed in the same VB syntax, is:
Dim result As Integer = CInt(leftOperand) << CInt(rightOperand)
Or CLng() if required. Implicit in C# code is that the programmer always has to cast the result back to the desired result type; there are a lot of SO questions about that from programmers who don't find it very intuitive. VB.NET has another feature that makes the automatic casts more survivable: it has overflow checking enabled by default, although that's not applicable to shifts.
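A short sketch of that C# behavior (the variable names are illustrative):
byte value = 4;
int wide = value << 8;            // operands are widened to int first: 1024
byte narrow = (byte)(value << 8); // casting back discards the high bits: 0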

Why are the unsigned CLR types so difficult to use in C#?

I came from a mostly C/C++ background before I began using C#. One of the things I did with my first project in C# was make a class like this
class Element {
    public uint Size;
    public ulong BigThing;
}
I was then mortified by the fact that this requires casting:
int x=MyElement.Size;
as does
int x=5;
uint total=MyElement.Size+x;
Why did the language designers decide to make the signed and unsigned integer types not implicitly castable? And why are the unsigned types not used more throughout the .Net library? For instance String.Length can never be negative, yet it is a signed integer.
Why did the language designers decide to make the signed and unsigned integer types not implicitly castable?
Because that could lose data or throw an exception, neither of which is generally a good thing to allow implicitly. (The implicit conversion from long to double can lose data too, admittedly, but in a different way.)
And why are the unsigned types not used more throughout the .Net library?
Unsigned types aren't CLS-compliant - not all .NET languages have always supported them. For example, Visual Basic didn't have "native" support for unsigned data types in .NET 1.0 and 1.1; it was added to the language for 2.0. (You could still use them, but they weren't part of the language itself - you couldn't use the normal arithmetic operators, for example.)
Along with Jon's answer: just because an unsigned number can't be negative doesn't mean every unsigned value fits in a signed type of the same size. uint runs from 0 to 4,294,967,295, but int runs from -2,147,483,648 to 2,147,483,647, so there's plenty of room above int's maximum for loss.
Because implicitly converting an unsigned integer holding, say, 3 billion into a signed integer is going to blow up.
Unsigned types have twice the maximum value of their signed counterparts. It's the same reason you can't implicitly cast a long to an int.
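A quick sketch of the data loss an implicit conversion would silently allow:
uint big = 3000000000;  // legal for uint, well above int.MaxValue
int signed = (int)big;  // explicit cast compiles; yields -1294967296 in the default unchecked context
// int oops = big;      // compile error: cannot implicitly convert type 'uint' to 'int'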
I was then mortified by the fact that this requires casting:
int x=MyElement.Size;
But you are contradicting yourself here. If you really (really) need Size to be unsigned, then assigning it to a (signed) x is an error, a deep flaw in your code.
For instance String.Length can never be negative, yet it is a signed integer
But String.IndexOf can return a negative number, and it would be awkward if String.Length and index values were of different types.
And while in theory there would be merit in an unsigned String.Length (4 GB cap), in practice even the current 2GB is large enough (because strings of that length are rare and unworkable anyway).
So the real answer is: Why use unsigned in the first place?
On the second count: because they wanted the CLR to be compatible with languages that don't have unsigned datatypes (read: VB.NET).

Will a C# "int" ever be 64 bits? [duplicate]

In my C# source code I may have declared integers as:
int i = 5;
or
Int32 i = 5;
In the currently prevalent 32-bit world they are equivalent. However, as we move into a 64-bit world, am I correct in saying that the following will become the same?
int i = 5;
Int64 i = 5;
No. The C# specification rigidly defines that int is an alias for System.Int32 with exactly 32 bits. Changing this would be a major breaking change.
The int keyword in C# is defined as an alias for the System.Int32 type and this is (judging by the name) meant to be a 32-bit integer. To quote the specifications:
CLI specification section 8.2.2 (Built-in value and reference types) has a table with the following:
System.Int32 - Signed 32-bit integer
C# specification section 8.2.1 (Predefined types) has a similar table:
int - 32-bit signed integral type
This guarantees that both System.Int32 in CLR and int in C# will always be 32-bit.
Will sizeof(testInt) ever be 8?
No, sizeof(testInt) is an error. testInt is a local variable. The sizeof operator requires a type as its argument. This will never be 8 because it will always be an error.
VS2010 compiles a C# managed integer as 4 bytes, even on a 64-bit machine.
Correct. I note that section 18.5.8 of the C# specification defines sizeof(int) as being the compile-time constant 4. That is, when you say sizeof(int) the compiler simply replaces that with 4; it is just as if you'd said "4" in the source code.
Does anyone know if/when the time will come that a standard "int" in C# will be 64 bits?
Never. Section 4.1.4 of the C# specification states that "int" is a synonym for "System.Int32".
If what you want is a "pointer-sized integer" then use IntPtr. An IntPtr changes its size on different architectures.
int is always synonymous with Int32 on all platforms.
It's very unlikely that Microsoft will change that in the future, as it would break lots of existing code that assumes int is 32-bits.
I think what you may be confused by is that int is an alias for Int32, so it will always be 4 bytes, but IntPtr is supposed to match the word size of the CPU architecture, so it will be 4 bytes on a 32-bit system and 8 bytes on a 64-bit system.
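A sketch that makes the distinction visible (sizeof(int) is legal in safe code for the predefined types):
Console.WriteLine(sizeof(int)); // always 4
Console.WriteLine(IntPtr.Size); // 4 in a 32-bit process, 8 in a 64-bit process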
According to the C# specification ECMA-334, section "11.1.4 Simple Types", the reserved word int will be aliased to System.Int32. Since this is in the specification it is very unlikely to change.
No matter whether you're using the 32-bit version or 64-bit version of the CLR, in C# an int will always mean System.Int32 and long will always mean System.Int64.
The following will always be true in C#:
sbyte  - signed,    8 bits, 1 byte
byte   - unsigned,  8 bits, 1 byte
short  - signed,   16 bits, 2 bytes
ushort - unsigned, 16 bits, 2 bytes
int    - signed,   32 bits, 4 bytes
uint   - unsigned, 32 bits, 4 bytes
long   - signed,   64 bits, 8 bytes
ulong  - unsigned, 64 bits, 8 bytes
An integer literal is just a sequence of digits (e.g. 314159) without any explicit type. C# assigns it the first type in the sequence (int, uint, long, ulong) in which it fits. This seems to have been slightly muddled in at least one of the responses above.
Weirdly, the unary minus operator (minus sign) appearing before a string of digits does not reduce the choice to (int, long): the literal is always positive, and the minus sign really is an operator. So presumably -314159 is exactly the same thing as -((int)314159). Except that there's apparently a special case to get -2147483648 straight into an int; otherwise it'd be -((uint)2147483648), which presumably does something unpleasant.
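A sketch of how unsuffixed literals get typed (the parentheses are only there so GetType() can be called on the literal):
Console.WriteLine((314159).GetType());        // System.Int32
Console.WriteLine((3000000000).GetType());    // System.UInt32 - too big for int
Console.WriteLine((5000000000).GetType());    // System.Int64
Console.WriteLine((-2147483648).GetType());   // System.Int32 - the special case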
Somehow it seems safe to predict that C# (and friends) will never bother with "squishy name" types for >=128 bit integers. We'll get nice support for arbitrarily large integers and super-precise support for UInt128, UInt256, etc. as soon as processors support doing math that wide, and hardly ever use any of it. 64-bit address spaces are really big. If they're ever too small it'll be for some esoteric reason like ASLR or a more efficient MapReduce or something.
Yes, as Jon said, and unlike the 'C/C++ world', Java and C# aren't dependent on the system they're running on. They have strictly defined lengths for byte/short/int/long and single/double precision floats, equal on every system.
An integer literal without a suffix can end up typed as 32-bit or 64-bit; it depends on the value it represents (note that int itself, as a type, is always 32-bit).
As defined in MSDN:
When an integer literal has no suffix, its type is the first of these types in which its value can be represented: int, uint, long, ulong.
Here is the address:
https://msdn.microsoft.com/en-us/library/5kzh1b5w.aspx

Casting a char to an unsigned short: what happens behind the scenes?

Given this field:
char lookup_ext[8192] = {0}; // Gets filled later
And this statement:
unsigned short *slt = (unsigned short*) lookup_ext;
What happens behind the scenes?
lookup_ext[1669] returns 67 = 0100 0011 (C), lookup_ext[1670] returns 78 = 0100 1110 (N) and lookup_ext[1671] returns 68 = 0100 0100 (D); yet slt[1670] returns 18273 = 0100 0111 0110 0001.
I'm trying to port this to C#, so besides an easy way out of this, I'm also wondering what really happens here. Been a while since I used C++ regularly.
Thanks!
The statement that you show doesn't cast a char to an unsigned short, it casts a pointer to a char to a pointer to an unsigned short. This means that the usual arithmetic conversions of the pointed-to data are not going to happen and that the underlying char data will just be interpreted as unsigned shorts when accessed through the slt variable.
Note that sizeof(unsigned short) is unlikely to be one, so that slt[1670] won't necessarily correspond to lookup_ext[1670]. It is more likely - if, say, sizeof(unsigned short) is two - to correspond to lookup_ext[3340] and lookup_ext[3341].
Do you know why the original code is using this aliasing? If it's not necessary, it might be worth trying to make the C++ code cleaner and verifying that the behaviour is unchanged before porting it.
If I understand correctly, the cast reinterprets a char array of size 8192 as an unsigned short array of half that size, i.e. 4096 elements.
So I don't understand what you are comparing in your question: slt[1670] should correspond to lookup_ext[1670*2] and lookup_ext[1670*2+1].
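For the C# port, one way to get the same reinterpretation is BitConverter, which reads bytes in the machine's native order, matching what the C++ pointer cast does on the same hardware (the array name here is illustrative):
byte[] lookupExt = new byte[8192]; // gets filled later
ushort value = BitConverter.ToUInt16(lookupExt, 1670 * 2); // same two bytes as slt[1670]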
Well, this statement
char lookup_ext[8192] = {0}; // Gets filled later
creates an array either locally or non-locally, depending on where the definition occurs. Initializing it like that, with an aggregate initializer, sets all its elements to zero (the first explicitly, the remaining ones implicitly). Therefore I wonder why your program outputs non-zero values; unless the fill happens before the read, in which case that makes sense.
unsigned short *slt = (unsigned short*) lookup_ext;
That will interpret the bytes making up the array as unsigned short objects when you read from that pointer's target. Strictly speaking, the above is undefined behavior, because you can't be sure the array is suitably aligned, and you would be reading through a pointer whose type differs from the original pointed-to type (unsigned char <-> unsigned short). In C++, the only portable way to read the value out of some other POD (plain old data: broadly speaking, all the structs and simple types, such as short, that are possible in C too) is by using library functions such as memcpy or memmove.
So if you read *slt above, you would interpret the first sizeof(*slt) bytes of the array and try to read them as an unsigned short (that's called a type pun).
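In C#, a rough analogue of the memcpy approach is to copy the bytes into a properly typed array; this is a sketch, not code from the original program:
byte[] raw = new byte[8192];
ushort[] shorts = new ushort[raw.Length / 2];
Buffer.BlockCopy(raw, 0, shorts, 0, raw.Length); // the count is in bytes, not elements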
When you do unsigned short *slt = (unsigned short*) lookup_ext;, slt points at the start of the array. When you then read *slt, sizeof(unsigned short) bytes are picked up from the location given by lookup_ext and interpreted as a single value. Since unsigned short is typically 2 bytes, the first two bytes of lookup_ext form the value of *slt.
