Trying to Assign a Large Number at Compile Time - c#

The following two operations are identical. However, MaxValues1 will not compile, failing with "The operation overflows at compile time in checked mode." Can someone please explain what is going on with the compiler, and how I can get around it without having to use a hard-coded value as in MaxValues2?
public const ulong MaxValues1 = 0xFFFF * 0xFFFF * 0xFFFF;
public const ulong MaxValues2 = 0xFFFD0002FFFF;

To make the literals unsigned, add the u suffix; to make them long, add the l suffix. Here you need both, i.e. ul.
If you really want the overflow behavior, you can add unchecked to get unchecked(0xFFFF * 0xFFFF * 0xFFFF) but that's likely not what you want. You get the overflow because the literals get interpreted as Int32 and not as ulong, and 0xFFFF * 0xFFFF * 0xFFFF does not fit a 32 bit integer, since it is approximately 2^48.
public const ulong MaxValues1 = 0xFFFFul * 0xFFFFul * 0xFFFFul;
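For comparison, here is a minimal sketch (the Wrapped constant name is mine) contrasting the suffixed version with the unchecked int version; the wrapped value follows from (2^16 - 1)^3 mod 2^32 = 3 * 2^16 - 1:
using System;

class SuffixDemo
{
    // Suffixed literals: the whole multiplication is done in ulong and never overflows.
    public const ulong MaxValues1 = 0xFFFFul * 0xFFFFul * 0xFFFFul;

    // Unchecked int multiplication wraps around to the low 32 bits of the true result.
    public const ulong Wrapped = unchecked(0xFFFF * 0xFFFF * 0xFFFF);

    static void Main()
    {
        Console.WriteLine(MaxValues1); // 281462092005375 (0xFFFD0002FFFF)
        Console.WriteLine(Wrapped);    // 196607 (0x2FFFF)
    }
}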

By default, integer literals are of type int. You can add the 'UL' suffix to change them to ulong literals.
public const ulong MaxValues1 = 0xFFFFUL * 0xFFFFUL * 0xFFFFUL;
public const ulong MaxValues2 = 0xFFFD0002FFFFUL;

Add numeric suffixes 'UL' to each of the numbers. Otherwise, C# considers them as Int32.
C# - Numeric Suffixes

I think it's actually not a ulong until you assign it at the end. Try:
public const ulong MaxValues1 = (ulong)0xFFFF * (ulong)0xFFFF * (ulong)0xFFFF;
i.e. in MaxValues1 you are multiplying three 32-bit ints together, which overflows because the result is implied to be another 32-bit int. When you cast, the operation changes to multiplying three ulongs together, which won't overflow because the result is inferred to be a ulong.
(ulong)0xFFFF * 0xFFFF * 0xFFFF;
0xFFFF * (ulong)0xFFFF * 0xFFFF;
also work, because the result type is determined by the widest operand type,
but
0xFFFF * 0xFFFF * (ulong)0xFFFF;
won't work, because the first two operands are multiplied as ints and overflow before the cast applies.
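A quick sketch (constant names are mine) showing which cast placements compile:
using System;

class CastPlacementDemo
{
    // Casting the first or second operand promotes the whole expression to ulong:
    public const ulong A = (ulong)0xFFFF * 0xFFFF * 0xFFFF; // compiles, 0xFFFD0002FFFF
    public const ulong B = 0xFFFF * (ulong)0xFFFF * 0xFFFF; // compiles, 0xFFFD0002FFFF

    // Casting only the last operand leaves 0xFFFF * 0xFFFF as an int multiplication,
    // which overflows at compile time in checked mode:
    // public const ulong C = 0xFFFF * 0xFFFF * (ulong)0xFFFF;

    static void Main() => Console.WriteLine(A == B); // True
}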

Related

representing a hexadecimal value by converting it to char

I am outputting 0x11a1 by converting it to char.
Then I multiply 0x11a1 by itself and output it again, but I do not get what I expect.
By doing int hgvvv = chch0; and writing that to the console, I can see that the computer thinks 0x11a1 * 0x11a1 equals 51009, but it actually equals 20367169.
As a result I do not get what I want.
Could you please explain why?
char chch0 = (char)0x11a1;
Console.WriteLine(chch0);
chch0 = (char)(chch0 * chch0);
Console.WriteLine(chch0);
int hgvvv = chch0;
Console.WriteLine(hgvvv);
We know that 1 byte is 8 bits.
We know that a char in C# is 2 bytes, which is 16 bits.
If we multiply 0x11a1 * 0x11a1 we get 0x136c741.
0x136c741 in binary is 0001001101101100011101000001.
Considering we only have 16 bits, we only see the last 16 bits, which are: 1100011101000001
1100011101000001 in hex is 0xc741.
That is the 51009 you are seeing.
You are being limited by the size of the char type in C#.
Hope this answer cleared things up!
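To see the truncation in code, a small sketch (variable names are mine):
using System;

class CharTruncationDemo
{
    static void Main()
    {
        int full = 0x11A1 * 0x11A1;        // 20367169 (0x136C741), computed as int
        char truncated = (char)full;       // a char keeps only the low 16 bits

        Console.WriteLine(full);           // 20367169
        Console.WriteLine((int)truncated); // 51009 (0xC741)
        Console.WriteLine(full & 0xFFFF);  // 51009 - same as masking the low 16 bits
    }
}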
By enabling the checked context in your project or by adding it this way in your code:
checked {
char chch0 = (char)0x11a1;
Console.WriteLine(chch0);
chch0 = (char)(chch0 * chch0); // OverflowException
Console.WriteLine(chch0);
int hgvvv = chch0;
Console.WriteLine(hgvvv);
}
You will see that you get an OverflowException, because the char type (2 bytes) can only store values up to Char.MaxValue = 0xFFFF.
The value you expect (20367169) is larger than 0xFFFF, so you essentially get only the two least significant bytes the type is able to store, which is:
Console.WriteLine(20367169 & 0xFFFF);
// prints: 51009

Why can't I assign (1<<31) to an ulong var? (Error 25 Constant value ... cannot be converted ...)

Why does this assignment produce a compile error: Constant value '-2147483648' cannot be converted to a 'ulong', and why do I have to use unchecked(...) in this case?
ulong xDummy30 = (1 << 30); // works
ulong xDummy31 = (1 << 31); // ERROR 25 Constant value '-2147483648' cannot be converted to a 'ulong'
ulong xDummy32 = (1 << 32); // works
Using this instead works:
ulong xDummy31a = unchecked((ulong)(1 << 31));
// or
ulong xDummy31b = (1ul << 31); // comment from Regis Portalez
Edit
The Question Why do I have to cast 0 (zero) when doing bitwise operations in C#? has a similar answer and the reason for the observed behaviour is the same. But they are different questions.
According to MSDN ulong reference all your integer literals 1, 30, 31 are regarded as int:
When an integer literal has no suffix, its type is the first of these types in which its value can be represented: int, uint, long, ulong.
According to MSDN << operator, the result of the << operation is also an int. When you shift by 30 the result is positive; when shifting by 31 the result is a negative int, which can't be assigned to an ulong.
Edit: HVD pointed out an error in the following. Thanks HVD!
Start of error - When shifting 32 bits, the compiler knows you want an ulong, and thus the result of the shift operation is a positive long, which can be converted to an ulong - end of error
The correct reason why 1<<32 does not lead to compiler error is in the provided link to operator <<:
If the first operand is an int, the shift count is given by the
low-order five bits of the second operand. That is, the actual shift
count is 0 to 31 bits.
32 in binary: 0010 0000; low-order five bits: 0 0000. So the shift actually performed is 1 << 0, which results in an int with the value 1, which of course can be assigned to an ulong.
To solve this, make sure that your number 1 is a long. In that case 1<<31 is still a positive long.
You can also use suffixes to specify the type of the literal according to the following rules:
If you use L, the type of the literal integer will be either long or ulong according to its size.
So 1L is a long; 1L <<31 is a positive long, and thus can be assigned to an ulong
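To make both points concrete, here is a small sketch (class and variable names are mine):
using System;

class ShiftDemo
{
    static void Main()
    {
        // int shift: the count is masked to 5 bits, so 1 << 32 is really 1 << 0.
        Console.WriteLine(1 << 32);  // 1
        Console.WriteLine(1 << 31);  // -2147483648 (sign bit set, negative int)

        // long shift: 1L << 31 stays positive, so the constant converts to ulong.
        ulong fromLong = 1L << 31;
        Console.WriteLine(fromLong); // 2147483648
    }
}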
var a = (1<<31);
Console.WriteLine(a);
ulong num = unchecked((ulong)(1<<31));
Console.WriteLine(num);
Output:
-2147483648
18446744071562067968
The 1<<31 value is a negative int (-2147483648), which cannot be represented in a UInt64.
https://dotnetfiddle.net/Widget/TfjnSZ
As a supplement to my question and the accepted answer, here is a note:
Quoting first the example of my question:
ulong xDummy30 = (1 << 30); // works
ulong xDummy31 = (1 << 31); // ERROR 25 Constant value '-2147483648' cannot be converted to a 'ulong'
ulong xDummy32 = (1 << 32); // works
It is correct that this assignment produces no compile error:
ulong xDummy32 = (1 << 32);
but it does not "work", contrary to what I wrote in my question. After this assignment xDummy32 is not 4294967296 but 1. Therefore bit shifts > 30 must be written this way:
ulong xDummy32 = (1UL << 32);
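A quick verification of this supplement (variable names are mine):
using System;

class WideShiftDemo
{
    static void Main()
    {
        ulong a = (1UL << 32); // shift performed on a 64-bit value
        ulong b = (1 << 32);   // shift performed on int; count masked to 0, result is 1

        Console.WriteLine(a);  // 4294967296
        Console.WriteLine(b);  // 1
    }
}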

Convert 24 bit value to float and back

Is it possible to convert a 24-bit integer value to float and then back to a 24-bit integer without losing data?
For example, let's consider 8 bit int, a byte, range is [-127..127] (we drop -128).
public static float ToFloatSample (byte x) { return x / 127f; }
So, if x == -127 the result will be -1; if x == 127 the result will be 1; if x == 64 the result will be ~0.5.
public static int ToIntSample (float x) { return (int) (x * 127f); }
So now:
int x = some_number;
float f = ToFloatSample (x);
int y = ToIntSample (f);
Will x == y always hold? With an 8-bit int, yes, but what if I use 24 bits?
Having thought about your question, I now understand what you're asking.
I understand you have 24-bits which represent a real number n such that -1 <= n <= +1 and you want to load this into an instance of System.Single, and back again.
In C/C++ this is actually quite easy with the frexp and ldexp functions, documented here ( how can I extract the mantissa of a double ), but in .NET it's a more involved process.
The C# language specification (and thus .NET) states it uses the IEEE-754 1989 format, which means you'll need to dump the bits into an integer type so you can perform the bitwise logic to extract the components. This question has already been asked here on SO, except for System.Double instead of System.Single, but converting the answer to work with Single is a trivial exercise for the reader ( extracting mantissa and exponent from double in c# ).
In your case, you'd want to store your 24-bit mantissa value in the low-24 bits of an Int32 and then use the code in that linked question to load and extract it from a Single instance.
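As a rough illustration of that approach (not the linked answer verbatim; the method name and sample value are mine), the bits of a Single can be pulled apart like this:
using System;

class SingleBitsSketch
{
    static void Decompose(float value)
    {
        // Reinterpret the float's bit pattern as an Int32, then mask out the fields.
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(value), 0);

        int sign     = (bits >> 31) & 0x1;
        int exponent = ((bits >> 23) & 0xFF) - 127; // stored exponent is biased by 127
        int mantissa = bits & 0x7FFFFF;             // 23 stored bits; the leading 1 is implicit

        Console.WriteLine($"sign={sign} exponent={exponent} mantissa=0x{mantissa:X6}");
    }

    static void Main() => Decompose(0.75f); // sign=0 exponent=-1 mantissa=0x400000
}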
Every integer in the range [-16777216, 16777216] is exactly representable as an IEEE 754 32-bit binary floating point number. That includes both the unsigned and 2's complement 24 bit integer ranges. Simple casting will do the job.
The range is wider than you would expect because there is an extra significand bit that is not stored - it is a binary digit that is known not to be zero.
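A quick check of that claim (the sample values are mine):
using System;

class Exact24BitDemo
{
    static void Main()
    {
        // Every integer in [-16777216, 16777216] has an exact float representation,
        // so int -> float -> int is lossless over the whole 24-bit range.
        int[] samples = { -16777216, -8388608, -1, 0, 1, 8388607, 16777216 };

        foreach (int x in samples)
        {
            float f = x;       // exact
            int back = (int)f; // exact
            Console.WriteLine($"{x} -> {back} (lossless: {x == back})");
        }
    }
}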

Unsigned shift right in C# Using Java semantics for negative numbers

I'm trying to port Java code to C# and I'm running into odd bugs related to the unsigned shift right operator >>>. Normally the code:
long l = (long) ((ulong) number) >> 2;
Would be the equivalent of Java's:
long l = number >>> 2;
However, for the case of -2147483648L, which you might recognize as Integer.MIN_VALUE, this returns a different number than it would in Java, since the cast to ulong changes the semantics of the number, hence I get a different result.
How would something like this be possible in C#?
I'd like to preserve the code semantics as much as possible, since it's a pretty complex body of code.
I believe your expression is incorrect when considering C#'s operator precedence. Your code, I believe, is converting your long to ulong, then back to long, then shifting. I'm assuming your intent was to perform the shift on the ulong.
From the C# Specification §7.2.1, unary operators (or in your case, the casting operation) take precedence over the shifting. Thus your code:
long l = (long) ((ulong) number) >> 2;
would be interpreted as:
ulong ulongNumber = (ulong)number;
long longNumber = (long)ulongNumber;
long shiftedlongNumber = longNumber >> 2;
Given number as -2147483648L, this yields -536870912.
By wrapping the conversion and shifting in parentheses:
long l = (long) (((ulong) number) >> 2);
Produces logic that could be rewritten as:
ulong ulongNumber = (ulong)number;
ulong shiftedulongNumber = ulongNumber >> 2;
long longShiftedNumber = (long)shiftedulongNumber;
Given number as -2147483648L, this yields 4611686017890516992.
EDIT: Note that given those ordering rules, there's an extra set of parentheses in my answer that isn't necessary. The correct expression could be written as:
long l = (long) ((ulong) number >> 2);
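If this pattern shows up a lot in the ported code, a small helper along these lines (the method name is mine) keeps the call sites close to the Java original:
using System;

class UnsignedShiftDemo
{
    // C# equivalent of Java's 'value >>> shift' for a 64-bit value:
    // reinterpret as ulong, shift logically, reinterpret back as long.
    static long UnsignedRightShift(long value, int shift) =>
        (long)((ulong)value >> shift);

    static void Main()
    {
        long number = -2147483648L; // Integer.MIN_VALUE, sign-extended to 64 bits
        Console.WriteLine(UnsignedRightShift(number, 2)); // 4611686017890516992
    }
}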

converting hex number to signed short

I'm trying to convert a string that includes a hex value into its equivalent signed short in C#
for example:
the equivalent hex number of -1 is 0xFFFF (in two bytes)
I want to do the inverse, i.e I want to convert 0xFFFF into -1
I'm using
string x = "FF";
short y = Convert.ToInt16(x,16);
but the output y is 255 instead of -1; I need the signed number equivalent.
can anyone help me?
thanks
When your input is "FF" you have the string representation in hex of a single byte.
If you convert it to a short (two bytes), the high-order bit of the resulting 16-bit value is not set, so no sign is applied and you get the value 255.
The string "FFFF", on the other hand, represents two bytes where the high-order bit is set to 1, so the result is negative if assigned to a signed type like Int16, and 65535 if assigned to an unsigned type like ushort:
string number = "0xFFFF";
short n = Convert.ToInt16(number, 16);
ushort u = Convert.ToUInt16(number, 16);
Console.WriteLine(n);
Console.WriteLine(u);
number = "0xFF";
byte b = Convert.ToByte(number, 16);
short x = Convert.ToInt16(number, 16);
ushort z = Convert.ToUInt16(number, 16);
Console.WriteLine(n);
Console.WriteLine(x);
Console.WriteLine(z);
Output:
-1
65535
-1
255
255
You're looking to convert the string representation of a signed byte, not short.
You should use Convert.ToSByte(string) instead.
A simple unit test to demonstrate
[Test]
public void MyTest()
{
short myValue = Convert.ToSByte("FF", 16);
Assert.AreEqual(-1, myValue);
}
Please see http://msdn.microsoft.com/en-us/library/bb311038.aspx for full details on converting between hex strings and numeric values.
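If you prefer to make the sign extension explicit instead of relying on Convert's two's-complement handling, here is a small sketch (names are mine):
using System;

class HexSignDemo
{
    static void Main()
    {
        // Parse "FF" as an unsigned byte, reinterpret it as a signed byte,
        // then let the sbyte -> short widening preserve the sign.
        short fromByte = (sbyte)Convert.ToByte("FF", 16);
        Console.WriteLine(fromByte); // -1

        // "FFFF" already fills all 16 bits, so ToInt16 returns the negative value directly.
        Console.WriteLine(Convert.ToInt16("FFFF", 16)); // -1
    }
}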
