I am wondering why the largest possible value of Int32 in .NET is 2147483647 and not 2147483648, given that 2³¹ = 2147483648.
Thank you
An Int32 is stored in 32 bits, not 31 bits, and half of its range is taken by negative numbers. Out of the remaining range, you lose one value to zero, leaving 2147483647 as the highest positive number.
The range for an Int32 is -2147483648 to 2147483647.
The positive side also includes zero, so it runs from 0 to 2147483647; because zero is counted on the positive side, the negative side runs from -1 down to -2147483648.
So overall the positive and negative sides take an equal number of values: 2³¹ each.
The actual data of an Int32 is stored in 31 bits: because a regular Int32 must also hold negative numbers, 1 bit is used as the sign bit and the remaining 31 bits are used for the value. If you use an unsigned 32-bit integer (UInt32) instead, you get the full 32 bits of data.
That means an int can hold 2147483648 non-negative values, from 0 to 2147483647, while int.MinValue is -2,147,483,648 because the negative side does not have to include 0.
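You can verify this straight from C#; here is a small sketch (just Console output, nothing beyond the built-in int and long types):

Console.WriteLine(int.MinValue); // -2147483648, i.e. -2^31
Console.WriteLine(int.MaxValue); //  2147483647, i.e.  2^31 - 1

// total number of representable values: 2^32
long count = (long)int.MaxValue - int.MinValue + 1;
Console.WriteLine(count);        // 4294967296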
Related
If I have an integer variable with the maximum value for a 32-bit integer (2,147,483,647) assigned to it, and I increment it by 1, it turns into the negative value -2,147,483,648:
int i = int.MaxValue; // i = 2,147,483,647
i = i + 1;
Response.Write(i); // i = -2,147,483,648
Can anyone explain this to me?
I could not find the exact reason for this change in value.
This is just integer overflow, where the value effectively leaks into the sign bit. It's simpler to reason about with sbyte, for example. Think about the bitwise representations of 127 and -128 as signed bytes:
127: 01111111
-128: 10000000
Basically the addition is performed as if with an infinite range, then the result is truncated to the appropriate number of bits, and that value is "interpreted" according to its type.
Note that this is all if you're executing in an unchecked context. In a checked context, an OverflowException will be thrown instead.
From section 7.8.4 of the C# 5 specification:
In a checked context, if the sum is outside the range of the result type, a System.OverflowException is thrown. In an unchecked context, overflows are not reported and any significant high-order bits outside the range of the result type are discarded.
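For example, here is a minimal sketch of the two behaviours the spec text above describes (assuming a plain console app):

int i = int.MaxValue;

unchecked
{
    Console.WriteLine(i + 1); // -2147483648: the overflow silently wraps
}

try
{
    checked
    {
        Console.WriteLine(i + 1); // never reached
    }
}
catch (OverflowException)
{
    Console.WriteLine("overflow detected"); // thrown in the checked context
}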
In a signed int, the first bit shows the sign and the remaining bits show the number.
So, in a 32-bit int, the first bit is the sign, and maxInt is 2147483647, or 01111111111111111111111111111111.
If we increment this number by 1 it becomes 10000000000000000000000000000000, which is -2147483648.
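You can print those bit patterns yourself with Convert.ToString(value, 2); a quick sketch:

int max = int.MaxValue;
int wrapped = unchecked(max + 1);

Console.WriteLine(Convert.ToString(max, 2).PadLeft(32, '0'));
// 01111111111111111111111111111111
Console.WriteLine(Convert.ToString(wrapped, 2).PadLeft(32, '0'));
// 10000000000000000000000000000000
Console.WriteLine(wrapped); // -2147483648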
I'm asking in the context of C#: why are Int64 and UInt64 both the same size, i.e. 64 bits? The same goes for the other integer types.
I was expecting UInt to be at least a bit smaller, since it does not need to represent the minus sign. Am I expecting something illogical here?
The 64 in Int64 and UInt64 means the size of the integer is 64 bits. That's why they are the same size.
The size is the same, but the range is different (because of how the most significant bit is used):
UInt64: 0 to 18,446,744,073,709,551,615;
Int64: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
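A small sketch that shows both facts at once (sizeof works on the built-in types without unsafe code):

Console.WriteLine(sizeof(long));   // 8 bytes = 64 bits
Console.WriteLine(sizeof(ulong));  // 8 bytes = 64 bits

Console.WriteLine(long.MinValue);  // -9223372036854775808
Console.WriteLine(long.MaxValue);  //  9223372036854775807
Console.WriteLine(ulong.MinValue); //  0
Console.WriteLine(ulong.MaxValue); //  18446744073709551615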
The fundamental reason is that modern CPUs are 64-bit.
So it's far easier to implement:
64-bit integer - one CPU register (say RAX)
32-bit integer - half CPU register (EAX)
16-bit integer - 1/4 (AX)
8-bit integer - 1/8 (AH)
otherwise (for, say, a 96-bit integer) you would have to design your own support for things like integer overflow detection, which would be slow.
It all comes down to what the most significant bit is used for. In signed data it indicates the sign; in unsigned data it is just another bit. This means that unsigned data can represent values twice as large in magnitude, but can only represent positive values (and zero). For example, in the case of 64 bits, unsigned values are 0 to 18,446,744,073,709,551,615, whereas signed values are -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
It is easier to see for 8-bit: -128 to 127 versus 0 to 255. The number of possible values is identical.
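Counting them makes this concrete; a quick sketch with the 8-bit types:

int signedCount   = sbyte.MaxValue - sbyte.MinValue + 1; // 127 - (-128) + 1
int unsignedCount = byte.MaxValue  - byte.MinValue  + 1; // 255 - 0 + 1

Console.WriteLine(signedCount);   // 256
Console.WriteLine(unsignedCount); // 256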
Here is the Integral Types Table for C#. If you look at it you can see:
long -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 Signed 64-bit integer
ulong 0 to 18,446,744,073,709,551,615 Unsigned 64-bit integer
If you look, the "range" of both is the same size: 2 ^ 64 possible values (including 0).
Probably easier with smaller data types:
sbyte -128 to 127 Signed 8-bit integer
byte 0 to 255 Unsigned 8-bit integer
Both have a "range" of 256 values, both of them are 8 bits, and 2^8 == 256
Note that signed and unsigned types have the same number of values because historically nearly all processors use two's complement for signed numbers. If they had used ones' complement, the effective range of signed numbers would be one less than the range of unsigned numbers, because there would be two 0 values (positive and negative 0).
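A short sketch of that two's-complement point: the bit pattern stays the same, only the interpretation changes:

sbyte minusOne = -1;                          // two's-complement bit pattern 11111111

// reinterpret the same 8 bits as unsigned
Console.WriteLine(unchecked((byte)minusOne)); // 255

// same idea at 32 bits: -1 and uint.MaxValue share one bit pattern
Console.WriteLine(unchecked((uint)-1));       // 4294967295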
What is the value of j?
Int32 i = Int32.MinValue;
Int32 j = -i;
Compiling in a checked context will throw an exception.
In an unchecked context we obtain the value Int32.MinValue.
But why is that?
Here are the min and max Int32 values:
Dec Bin (first bit is a sign bit)
Int32.MinValue -2147483648 10000000000000000000000000000000
Int32.MaxValue 2147483647 01111111111111111111111111111111
When you try to compute -(-2147483648) with explicit overflow checking, you get an exception, because 2147483648 is bigger than the maximum value allowed for the int type.
Then why do you get MinValue when overflow is allowed? Because when Int32.MinValue is negated, the mathematical result is 2147483648, whose binary representation is 10000000000000000000000000000000; but in a signed integer the first bit is the sign bit, so you get exactly Int32.MinValue back.
So the problem here is treating the first bit as the sign. If you assign the result of negating Int32.MinValue to an unsigned integer, you get the value 2147483648 as expected:
Int32 i = Int32.MinValue;
UInt32 j = (UInt32)-i; // it has 32 bits and does not treat first bit as sign
This is an example of integer overflow. In a checked context, the integer overflow will be detected and converted to an exception, because the language designers decided so.
To explain integer overflow, you can do the calculation in binary by hand. To calculate -X, take X in binary, change all the 1's to 0's and 0's to 1's, then add 1 (the number 1, not a 1 bit).
Example: 5 = 00000000000000000000000000000101
flip all bits: 11111111111111111111111111111010
add one: 11111111111111111111111111111011 which is -5
Int32.MinValue = 10000000000000000000000000000000
flip all bits: 01111111111111111111111111111111
add one: 10000000000000000000000000000000
If you take Int32.MinValue and negate it, it doesn't change. -Int32.MinValue can't fit in an int - if you do -(Int64)Int32.MinValue it will work as expected.
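A quick sketch of both behaviours mentioned above:

int i = int.MinValue;

Console.WriteLine(unchecked(-i)); // -2147483648: negation wraps back to MinValue
Console.WriteLine(-(long)i);      //  2147483648: widening to 64 bits first works as expected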
I'm learning C# and trying to get a logical visual representation of the actual range of data types in C#.
I have moved through the integers and am now up to float and double data types.
8 bits (1 byte), sbyte, -128 to 127.
8 bits (1 byte), byte, 0 to 255.
16 bits (2 bytes), short, -32,768 to 32,767.
16 bits (2 bytes), ushort, 0 to 65,535.
32 bits (4 bytes), int, -2,147,483,648 to 2,147,483,647.
32 bits (4 bytes), uint, 0 to 4,294,967,295.
64 bits (8 bytes), long, -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
64 bits (8 bytes), ulong, 0 to 18,446,744,073,709,551,615.
Here are the references to the float and double data type ranges on MSDN:
Float: http://msdn.microsoft.com/en-us/library/b1e65aza(v=vs.110).aspx
Double: http://msdn.microsoft.com/en-us/library/678hzkk9.aspx
So, trying to keep with the convention of specifying the actual range of numbers as in the numbered list above, what do these two ranges actually represent?
The ranges are actually –infinity to +infinity.
The largest finite float is 340282346638528859811704183484516925440. This is 2¹²⁸ - 2¹²⁸⁻²⁴ = 2¹²⁸ - 2¹⁰⁴.
The largest finite double is 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368. This is 2¹⁰²⁴ - 2¹⁰²⁴⁻⁵³ = 2¹⁰²⁴ - 2⁹⁷¹.
The ranges are represented in "exponential format" for conciseness. For example, +1.7e+308 means 17 followed by 307 zeros:
1,700,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000
So the exponential format is preferred for such large numbers. And the same goes for extremely small numbers.
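For example, a small sketch that prints the same values in both forms (the exact digits depend on the runtime's formatting, so treat the comments as approximate):

Console.WriteLine(float.MaxValue);   // prints something like 3.4028235E+38
Console.WriteLine(double.MaxValue);  // prints something like 1.7976931348623157E+308

// "F0" expands the value in fixed-point form; digits beyond the type's
// precision come out as zeros
Console.WriteLine(double.MaxValue.ToString("F0"));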
Also, take a look at this reading by Jon Skeet.
There is a similar question here. Sometimes that solution throws exceptions because the numbers might be too large.
I think that if there were a way of looking at the bytes of a decimal number, it would be more efficient. A decimal number has to be represented by some number of bytes. For example, an Int32 is represented by 32 bits, and all the numbers whose first bit is 1 are negative. Maybe there is a similar relationship for decimal numbers. How can you look at the bytes of a decimal number, or the bytes of an integer?
If you are really talking about decimal numbers (as opposed to floating-point numbers), then Decimal.GetBits will let you look at the individual bits of a decimal. The MSDN page also contains a description of the meaning of the bits.
On the other hand, if you just want to check whether a number has a fractional part or not, doing a simple
var hasFractionalPart = (myValue - Math.Round(myValue) != 0);
is much easier than decoding the binary structure. This should work for decimals as well as classic floating-point data types such as float or double. In the latter case, due to floating-point rounding error, it might make sense to check for Math.Abs(myValue - Math.Round(myValue)) < someThreshold instead of comparing to 0.
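For example, a sketch of both variants (the 1e-9 threshold is an arbitrary choice for illustration, not a universal constant):

decimal d = 4.75m;
Console.WriteLine(d - Math.Round(d) != 0);              // True: 4.75 has a fractional part

// floating-point noise: Math.Sqrt(2) * Math.Sqrt(2) is 2.0000000000000004 as a double
double x = Math.Sqrt(2) * Math.Sqrt(2);
Console.WriteLine(x - Math.Round(x) != 0);              // True, even though x is "really" 2
Console.WriteLine(Math.Abs(x - Math.Round(x)) < 1e-9);  // True: treated as a whole number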
If you want a reasonably efficient way of getting the fractional part of a decimal value, you can just take it modulo one.
decimal number = 4.75M;
decimal fractionalPart = number % 1;
Console.WriteLine(fractionalPart); //will print 0.75
While it may not be the theoretically optimal solution, it'll be quite fast, and almost certainly fast enough for your purposes (far better than string manipulation and parsing, which is a common naive approach).
You can use Decimal.GetBits in order to retrieve the bits from a decimal structure.
The MSDN page linked above details how they are laid out in memory:
The binary representation of a Decimal number consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the integer number and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28.
The return value is a four-element array of 32-bit signed integers.
The first, second, and third elements of the returned array contain the low, middle, and high 32 bits of the 96-bit integer number.
The fourth element of the returned array contains the scale factor and sign. It consists of the following parts:
Bits 0 to 15, the lower word, are unused and must be zero.
Bits 16 to 23 must contain an exponent between 0 and 28, which indicates the power of 10 to divide the integer number.
Bits 24 to 30 are unused and must be zero.
Bit 31 contains the sign; 0 meaning positive, and 1 meaning negative.
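Putting that description into code, a small sketch that pulls the scale and sign out of the fourth element (the local variable names are just for illustration):

decimal value = -4.75m;
int[] bits = decimal.GetBits(value);

int flags = bits[3];
int scale = (flags >> 16) & 0xFF;  // bits 16 to 23: the power of 10 to divide by
bool isNegative = flags < 0;       // bit 31 is the sign bit; set means negative

Console.WriteLine(bits[0]);        // 475: low 32 bits of the 96-bit integer
Console.WriteLine(scale);          // 2: -4.75 is stored as -475 / 10^2
Console.WriteLine(isNegative);     // True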
Going with Oded's detailed info on using GetBits, I came up with this:
const int EXP_MASK = 0x00FF0000;
bool hasDecimal = (Decimal.GetBits(value)[3] & EXP_MASK) != 0x0;
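Run against a few sample values (using the EXP_MASK constant above), the check behaves like this; a quick sketch, and note that it keys off the stored scale, so 1.0M, which carries a scale of 1, also reports true:

Console.WriteLine((Decimal.GetBits(4.75M)[3] & EXP_MASK) != 0x0); // True  (scale 2)
Console.WriteLine((Decimal.GetBits(5M)[3] & EXP_MASK) != 0x0);    // False (scale 0)
Console.WriteLine((Decimal.GetBits(1.0M)[3] & EXP_MASK) != 0x0);  // True  (scale 1, even though the value is whole)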