Why are Int64 and UInt64 the same size? - c#

I'm asking in the context of C#. Why are Int64 and UInt64 both the same size, i.e. 64 bits? The same goes for the other integer types.
I was expecting UInt64 to be at least one bit smaller, since it does not need to represent the minus sign. Am I expecting something illogical here?

The 64 in Int64 and UInt64 means the size of the integer is 64 bits. That's why they are the same size.
The size is the same, but the range is different (because of how the most significant bit is used):
UInt64: 0 to 18,446,744,073,709,551,615;
Int64: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
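A minimal sketch verifying both claims; this is just a console snippet, nothing else assumed:
using System;
Console.WriteLine(sizeof(long));  // 8 bytes
Console.WriteLine(sizeof(ulong)); // 8 bytes
// Same size, different ranges, but both cover 2^64 distinct values.
Console.WriteLine(long.MinValue + " to " + long.MaxValue);
Console.WriteLine(ulong.MinValue + " to " + ulong.MaxValue);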

The fundamental reason is that modern CPUs are 64-bit. So it's far easier to implement:
64-bit integer - one CPU register (say RAX)
32-bit integer - half a CPU register (EAX)
16-bit integer - a quarter (AX)
8-bit integer - an eighth (AL)
Otherwise (for, say, a 96-bit integer) you would have to design your own support for things like integer overflow detection, which would be slow.

It all comes down to what the most significant bit is used for. In signed data it indicates the sign; in unsigned data it is just another bit. This means that unsigned data can represent values twice as large in magnitude, but only positive values (and zero). For example, with 64 bits, unsigned values range from 0 to 18,446,744,073,709,551,615, whereas signed values range from -9,223,372,036,854,775,808 through 9,223,372,036,854,775,807.
It is easier to see for 8-bit: -128 to 127 versus 0 to 255. The number of possible values is identical.
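A small sketch making the role of the top bit visible by reinterpreting the same 8-bit pattern both ways:
using System;
// The bit pattern 11111111 means -1 as sbyte but 255 as byte.
sbyte s = -1;
Console.WriteLine(s);                  // -1
Console.WriteLine(unchecked((byte)s)); // 255
// 10000000 is the extreme case: -128 as sbyte, 128 as byte.
Console.WriteLine(unchecked((byte)sbyte.MinValue)); // 128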

Here is the Integral Types Table for C#. If you look there you can see:
long -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 Signed 64-bit integer
ulong 0 to 18,446,744,073,709,551,615 Unsigned 64-bit integer
If you look, the number of representable values is the same for both: 2^64 (including the 0).
Probably easier with smaller data types:
sbyte -128 to 127 Signed 8-bit integer
byte 0 to 255 Unsigned 8-bit integer
Both have a "range" of 256 values, both of them are 8 bits, and 2^8 == 256
Note that signed and unsigned types have the same number of values because nearly all processors have historically used two's complement for signed numbers. If they had used ones' complement, the effective range of signed numbers would be one less than that of unsigned numbers, because there would be two zero values (positive and negative zero).

Related

I am reading floating-point values from a PLC's 32-bit memory area as if they were integer values. How can I convert such an integer to a Single in C#?

I am reading values from an S7-300 PLC with my C# code. When the values are in INT format there is no problem. But there are some 32-bit memory areas (double words) that are encoded in the IEEE 754 floating-point standard (the first bit is the sign bit, the next 8 bits the exponent, and the remaining 23 bits the mantissa).
I can read these memory areas from the PLC only as Int32 (as if they were integers).
How can I convert this value, read as an integer, to a Single value in C#, respecting the IEEE 754 floating-point encoding of the double word?
It worked just as I wanted with Eldar's answer. If you read a 32-bit float value as an integer, just convert it like this (note that BitConverter.ToSingle needs a start index):
var finalSingle = BitConverter.ToSingle(BitConverter.GetBytes(s7Int), 0);
Thanks again to Eldar :-)
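For a self-contained illustration, here is a minimal sketch; s7Int and its value are hypothetical stand-ins for the raw Int32 read from the PLC, and on newer runtimes (.NET Core 2.0+ / .NET 5+) BitConverter.Int32BitsToSingle reinterprets the bits directly without allocating a byte array:
using System;
// Hypothetical raw value read from the PLC's double word:
// 0x41480000 is the IEEE 754 bit pattern of 12.5f.
int s7Int = 0x41480000;
// Classic approach: round-trip through a byte array.
float viaBytes = BitConverter.ToSingle(BitConverter.GetBytes(s7Int), 0);
// Newer runtimes: reinterpret the bits directly.
float viaBits = BitConverter.Int32BitsToSingle(s7Int);
Console.WriteLine(viaBytes); // 12.5
Console.WriteLine(viaBits);  // 12.5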

What are the actual ranges of floating point and double data types in C#?

I'm learning C# and trying to get a logical visual representation of the actual range of data types in C#.
I have moved through the integers and am now up to float and double data types.
8 bits (1 byte), sbyte, -128 to 127.
8 bits (1 byte), byte, 0 to 255.
16 bits (2 bytes), short, -32,768 to 32,767.
16 bits (2 bytes), ushort, 0 to 65,535.
32 bits (4 bytes), int, -2,147,483,648 to 2,147,483,647.
32 bits (4 bytes), uint, 0 to 4,294,967,295.
64 bits (8 bytes), long, -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
64 bits (8 bytes), ulong, 0 to 18,446,744,073,709,551,615.
Here are the references for the float and double data type sizes on MSDN:
Float: http://msdn.microsoft.com/en-us/library/b1e65aza(v=vs.110).aspx
Double: http://msdn.microsoft.com/en-us/library/678hzkk9.aspx
So, trying to keep with the convention of specifying the actual range of numbers as in the numbered list above, what do these two ranges actually represent?
The ranges are actually -infinity to +infinity.
The largest finite float is 340282346638528859811704183484516925440. This is 2^128 − 2^(128−24), i.e. 2^128 − 2^104.
The largest finite double is 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368. This is 2^1024 − 2^(1024−53), i.e. 2^1024 − 2^971.
The ranges are represented in "exponential format" for conciseness. For example, +1.7e+308 means 17 followed by 307 zeros:
1,700,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000
So the exponential format is preferred for such large numbers. And the same goes for extremely small numbers.
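A minimal sketch showing these limits, and the saturating overflow, using the framework's own constants (exact output formatting varies by runtime and culture):
using System;
Console.WriteLine(float.MaxValue);  // ~3.4028235E+38
Console.WriteLine(double.MaxValue); // ~1.7976931348623157E+308
// Floating-point overflow does not throw; it saturates to infinity.
float f = float.MaxValue * 2f;
Console.WriteLine(float.IsInfinity(f)); // True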
Also, take a look at this reading by Jon Skeet.

Check if decimal contains decimal places by looking at the bytes

There is a similar question here. Sometimes that solution throws exceptions because the numbers might be too large.
I think that if there were a way of looking at the bytes of a decimal number, it would be more efficient. A decimal number has to be represented by some number of bytes. For example, an Int32 is represented by 32 bits, and all the numbers whose first bit is 1 are negative. Maybe there is some similar relationship with decimal numbers. How could you look at the bytes of a decimal number, or the bytes of an integer?
If you are really talking about decimal numbers (as opposed to floating-point numbers), then Decimal.GetBits will let you look at the individual bits of a decimal. The MSDN page also contains a description of the meaning of the bits.
On the other hand, if you just want to check whether a number has a fractional part or not, doing a simple
var hasFractionalPart = (myValue - Math.Round(myValue)) != 0;
is much easier than decoding the binary structure. This should work for decimals as well as classic floating-point data types such as float or double. In the latter case, due to floating-point rounding error, it might make sense to check for Math.Abs(myValue - Math.Round(myValue)) < someThreshold instead of comparing to 0.
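As a sketch, here are both checks side by side; the threshold is an arbitrary illustrative tolerance, not a recommended value:
using System;
decimal d = 4.75M;
bool decimalHasFraction = d - Math.Round(d) != 0; // exact for decimal
double x = 4.75;
const double threshold = 1e-9; // arbitrary tolerance for rounding error
bool doubleHasFraction = Math.Abs(x - Math.Round(x)) > threshold;
Console.WriteLine(decimalHasFraction); // True
Console.WriteLine(doubleHasFraction);  // True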
If you want a reasonably efficient way of getting the 'decimal' value of a decimal type you can just mod it by one.
decimal number = 4.75M;
decimal fractionalPart = number % 1;
Console.WriteLine(fractionalPart); //will print 0.75
While it may not be the theoretically optimal solution, it'll be quite fast, and almost certainly fast enough for your purposes (far better than string manipulation and parsing, which is a common naive approach). Note that the sign of % in C# follows the dividend, so -4.75M % 1 yields -0.75M.
You can use Decimal.GetBits in order to retrieve the bits from a decimal structure.
The MSDN page linked above details how they are laid out in memory:
The binary representation of a Decimal number consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the integer number and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28.
The return value is a four-element array of 32-bit signed integers.
The first, second, and third elements of the returned array contain the low, middle, and high 32 bits of the 96-bit integer number.
The fourth element of the returned array contains the scale factor and sign. It consists of the following parts:
Bits 0 to 15, the lower word, are unused and must be zero.
Bits 16 to 23 must contain an exponent between 0 and 28, which indicates the power of 10 to divide the integer number.
Bits 24 to 30 are unused and must be zero.
Bit 31 contains the sign; 0 meaning positive, and 1 meaning negative.
Going with Oded's detailed info to use GetBits, I came up with this
const int EXP_MASK = 0x00FF0000;
bool hasDecimal = (Decimal.GetBits(value)[3] & EXP_MASK) != 0x0;
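One caveat with this scale check: a decimal can carry trailing zeros (1.00M is stored with a scale of 2), so a nonzero exponent does not always mean a nonzero fractional part. A small sketch of the difference:
using System;
const int EXP_MASK = 0x00FF0000;
// 1.00M stores a scale of 2 even though its fractional part is zero.
decimal value = 1.00M;
bool scaleIsNonZero = (Decimal.GetBits(value)[3] & EXP_MASK) != 0;
bool hasFraction = value % 1 != 0;
Console.WriteLine(scaleIsNonZero); // True
Console.WriteLine(hasFraction);    // False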

What is the difference between “int” and “uint” / “long” and “ulong”?

I know about int and long (32-bit and 64-bit numbers), but what are uint and ulong?
The primitive data types prefixed with "u" are unsigned versions with the same bit sizes. Effectively, this means they cannot store negative numbers, but on the other hand they can store positive numbers twice as large as their signed counterparts. The signed counterparts do not have "u" prefixed.
The limits for int (32 bit) are:
int: -2147483648 to 2147483647
uint: 0 to 4294967295
And for long (64 bit):
long: -9223372036854775808 to 9223372036854775807
ulong: 0 to 18446744073709551615
uint and ulong are the unsigned versions of int and long. That means they can't be negative. Instead they have a larger maximum value.
Type Min Max CLS-compliant
int -2,147,483,648 2,147,483,647 Yes
uint 0 4,294,967,295 No
long -9,223,372,036,854,775,808 9,223,372,036,854,775,807 Yes
ulong 0 18,446,744,073,709,551,615 No
To write a literal unsigned int in your source code, use the suffix u or U, for example 123U.
You should not use uint and ulong in your public interface if you wish to be CLS-Compliant.
Read the documentation for more information:
int
uint
long
ulong
By the way, there are also short and ushort, and byte and sbyte.
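A small sketch of the suffixes and the wrap-around behavior (the values are arbitrary illustrations):
using System;
uint u = 123U;    // U suffix: unsigned int literal
ulong ul = 123UL; // UL suffix: unsigned long literal
Console.WriteLine(u + ", " + ul); // 123, 123
// Unsigned types cannot go negative; in the default unchecked
// context, decrementing past zero wraps around instead.
uint zero = 0;
zero--;
Console.WriteLine(zero); // 4294967295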
The difference is that the uint and ulong are unsigned data types, meaning the range is different: They do not accept negative values:
int range: -2,147,483,648 to 2,147,483,647
uint range: 0 to 4,294,967,295
long range: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
ulong range: 0 to 18,446,744,073,709,551,615
u means unsigned, so ulong is a large number without sign. You can store a bigger value in ulong than long, but no negative numbers allowed.
A long value is stored in 64 bits, with its first bit indicating whether the number is positive or negative, while a ulong is also 64 bits, with all 64 bits used to store the number. So the maximum of ulong is 2^64 − 1, while the maximum of long is 2^63 − 1.
Based on the other answers here and a little review, you can understand it this way: "unsigned" refers to the absence of an explicit sign (think of the "-" in -1) and the inability to hold negative values.
Because the negative end is removed as an option, that capacity is allocated to the positive end instead, hence the doubling of the maximum positive value. So instead of the bit range being split between positive and negative values, for ushort, uint, ulong, etc. it is allocated entirely to the positive end.
It's been a while since I C++'d, but these answers are off a bit.
As far as the size goes, in C and C++ 'int' isn't anything fixed. It's a notional standard integer, assumed to be fast for purposes such as iteration; it doesn't have a preset size.
So the answers are correct with respect to the differences between int and uint, but incorrect when they talk about "how large they are" or what their range is. That size is undefined, or more accurately, it changes with the compiler and platform.
It's never polite to discuss the size of your bits in public.
When you compile a program, int does have a size, as you've taken the abstract C/C++ and turned it into concrete machine code.
So, TODAY, practically speaking with most common compilers, they are correct. But do not assume this.
Specifically: if you're writing a 32-bit program, int will be one thing; for 64-bit it can be different, and 16-bit is different again. I've gone through all three, and briefly looked at the 6502 (shudder).
A brief google search shows this:
https://www.tutorialspoint.com/cprogramming/c_data_types.htm
This is also good info:
https://docs.oracle.com/cd/E19620-01/805-3024/lp64-1/index.html
Use int if you really don't care how large your bits are; it can change.
Use size_t and ssize_t if you want to know how large something is.
If you're reading or writing binary data, don't use int. Use a (usually platform/source dependent) specific keyword. WinSDK has plenty of good, maintainable examples of this. Other platforms do too.
I've spent a LOT of time going through code from people that "SMH" at the idea that this is all just academic/pedantic. These are the people that write unmaintainable code. Sure, it's easy to use type 'int' without all the extra typing, but it's a lot of work to figure out what they really meant, and a bit mind-numbing.
It's fragile coding when you mix int and assume sizes.
Use int and uint when you just want a fast integer and don't care about the range (other than signed/unsigned).

Is an int a 64-bit integer in 64-bit C#?

In my C# source code I may have declared integers as:
int i = 5;
or
Int32 i = 5;
In the currently prevalent 32-bit world they are equivalent. However, as we move into a 64-bit world, am I correct in saying that the following will become the same?
int i = 5;
Int64 i = 5;
No. The C# specification rigidly defines that int is an alias for System.Int32 with exactly 32 bits. Changing this would be a major breaking change.
The int keyword in C# is defined as an alias for the System.Int32 type and this is (judging by the name) meant to be a 32-bit integer. Looking at the specification:
CLI specification section 8.2.2 (Built-in value and reference types) has a table with the following:
System.Int32 - Signed 32-bit integer
C# specification section 8.2.1 (Predefined types) has a similar table:
int - 32-bit signed integral type
This guarantees that both System.Int32 in CLR and int in C# will always be 32-bit.
Will sizeof(testInt) ever be 8?
No, sizeof(testInt) is an error. testInt is a local variable. The sizeof operator requires a type as its argument. This will never be 8 because it will always be an error.
VS2010 compiles a C# managed integer as 4 bytes, even on a 64-bit machine.
Correct. I note that section 18.5.8 of the C# specification defines sizeof(int) as being the compile-time constant 4. That is, when you say sizeof(int) the compiler simply replaces that with 4; it is just as if you'd said "4" in the source code.
Does anyone know if/when the time will come that a standard "int" in C# will be 64 bits?
Never. Section 4.1.4 of the C# specification states that "int" is a synonym for "System.Int32".
If what you want is a "pointer-sized integer" then use IntPtr. An IntPtr changes its size on different architectures.
int is always synonymous with Int32 on all platforms.
It's very unlikely that Microsoft will change that in the future, as it would break lots of existing code that assumes int is 32-bits.
I think what you may be confused by is that int is an alias for Int32 so it will always be 4 bytes, but IntPtr is suppose to match the word size of the CPU architecture so it will be 4 bytes on a 32-bit system and 8 bytes on a 64-bit system.
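A minimal sketch making the distinction visible; the IntPtr.Size output depends on whether the process runs as 32-bit or 64-bit:
using System;
Console.WriteLine(sizeof(int)); // always 4, on every platform
Console.WriteLine(IntPtr.Size); // 4 in a 32-bit process, 8 in a 64-bit process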
According to the C# specification ECMA-334, section "11.1.4 Simple Types", the reserved word int will be aliased to System.Int32. Since this is in the specification it is very unlikely to change.
No matter whether you're using the 32-bit version or 64-bit version of the CLR, in C# an int will always mean System.Int32 and long will always mean System.Int64.
The following will always be true in C#:
sbyte signed 8 bits, 1 byte
byte unsigned 8 bits, 1 byte
short signed 16 bits, 2 bytes
ushort unsigned 16 bits, 2 bytes
int signed 32 bits, 4 bytes
uint unsigned 32 bits, 4 bytes
long signed 64 bits, 8 bytes
ulong unsigned 64 bits, 8 bytes
An integer literal is just a sequence of digits (e.g. 314159) without any of these explicit types. C# assigns it the first type in the sequence (int, uint, long, ulong) in which it fits. This seems to have been slightly muddled in at least one of the responses above.
Weirdly, the unary minus operator (minus sign) showing up before a string of digits does not reduce the choice to (int, long): the literal is always positive; the minus sign really is an operator. So presumably -314159 is exactly the same thing as -((int)314159). Except apparently there's a special case to get -2147483648 straight into an int; otherwise it'd be -((uint)2147483648), which I presume would do something unpleasant.
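A sketch of the literal-typing rule in action; the values are chosen to straddle the type boundaries, and GetType just makes the inferred types visible:
using System;
var a = 2147483647;          // fits in int      -> System.Int32
var b = 2147483648;          // too big for int  -> System.UInt32
var c = 4294967296;          // too big for uint -> System.Int64
var d = 9223372036854775808; // too big for long -> System.UInt64
Console.WriteLine(a.GetType()); // System.Int32
Console.WriteLine(b.GetType()); // System.UInt32
Console.WriteLine(c.GetType()); // System.Int64
Console.WriteLine(d.GetType()); // System.UInt64
int min = -2147483648; // the special case: compiles as an int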
Somehow it seems safe to predict that C# (and friends) will never bother with "squishy name" types for >=128 bit integers. We'll get nice support for arbitrarily large integers and super-precise support for UInt128, UInt256, etc. as soon as processors support doing math that wide, and hardly ever use any of it. 64-bit address spaces are really big. If they're ever too small it'll be for some esoteric reason like ASLR or a more efficient MapReduce or something.
Yes, as Jon said, and unlike the 'C/C++ world', Java and C# aren't dependent on the system they're running on. They have strictly defined lengths for byte/short/int/long and single/double precision floats, equal on every system.
An integer literal without a suffix can be either 32-bit or 64-bit; it depends on the value it represents.
as defined in MSDN:
When an integer literal has no suffix, its type is the first of these types in which its value can be represented: int, uint, long, ulong.
Here is the address:
https://msdn.microsoft.com/en-us/library/5kzh1b5w.aspx
