Octal equivalent in C#

In the C language, an octal number can be written by placing a 0 before the number, e.g.
int i = 012; // Equals 10 in decimal.
I found the hexadecimal equivalent in C#, written by placing 0x before the number, e.g.
int i = 0xA; // Equals 10 in decimal.
Now my question is:
Is there any equivalent of octal numbers in C#, to represent a value as octal?

No, there are no octal number literals in C#.
For strings: Convert.ToInt32("12", 8) returns 10.
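A minimal round-trip sketch of that approach, using only Convert.ToInt32 and Convert.ToString (both accept a base argument):
int i = Convert.ToInt32("12", 8);   // parse octal string -> 10
string s = Convert.ToString(10, 8); // format back to octal -> "12"
Console.WriteLine(i);               // 10
Console.WriteLine(s);               // 12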

No, there isn't; the language specification (ECMA-334) is quite specific.
4th edition, page 72:
9.4.4.2 Integer literals
Integer literals are used to write values of types int, uint, long,
and ulong. Integer literals have two possible forms: decimal and
hexadecimal.
No octal form.

No, there are no octal literals in C#.
If necessary, you could pass a string and a base to Convert.ToInt32, but it's obviously nowhere near as nice as a literal:
int i = Convert.ToInt32("12", 8);

No, there are no octal numbers in C#.
Use public static int ToInt32(string value, int fromBase);
fromBase (Type: System.Int32): the base of the number in value, which must be 2, 8, 10, or 16.
(MSDN)

You can't use literals, but you can parse an octal number: Convert.ToInt32("12", 8).

Related

Inverting all 32 bits in a number using binary operators in C#

I am wondering how you take a number (for example 9), represent it as a 32-bit int (00000000000000000000000000001001), then invert or flip every bit (11111111111111111111111111110110) so that the zeroes become ones and the ones become zeroes.
I know how to do that by replacing the characters in a string, but I need to know how to do it with binary operators on a binary number.
I think you have to use the "~" operator, but it just gives me a negative number when I use it on a value.
That is the correct behavior. The int data type in C# is a signed 32-bit integer, so 11111111111111111111111111110110 is in fact a negative number (-10 in two's complement).
As Marc pointed out, if you want to use unsigned values, declare your number as a uint.
If you look at the decimal version of your number, then it's a negative number.
If you declare it as an unsigned int, then it's a positive one.
But this doesn't matter: in binary it will always be 11111111111111111111111111110110.
Try this:
int number = 9;
Console.WriteLine(Convert.ToString(number, 2)); //Gives you 1001
number = ~number; //Invert all bits
Console.WriteLine(Convert.ToString(number, 2));
//Gives you your wanted result: 11111111111111111111111111110110
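A minimal sketch of the uint variant suggested above: with an unsigned type, the inverted value prints as a positive number.
uint number = 9;
uint inverted = ~number; // flips all 32 bits; the result stays unsigned
Console.WriteLine(inverted); // 4294967286 (positive)
// Convert.ToString(value, toBase) has no uint overload, so reinterpret the bits as int:
Console.WriteLine(Convert.ToString(unchecked((int)inverted), 2));
// 11111111111111111111111111110110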

0x80000000 == 2147483648 in C# but not in VB.NET

In C#:
0x80000000==2147483648 //outputs True
In VB.NET:
&H80000000=2147483648 'outputs False
How is this possible?
This is related to the history behind the languages.
C# has always supported unsigned integers. The value you use is too large for int, so the compiler picks the next type that can correctly represent the value, which is uint for both sides.
VB.NET didn't acquire unsigned integer support until version 8 (.NET 2.0). So traditionally, the compiler was forced to pick Long as the type for the 2147483648 literal. The rule was, however, different for the hexadecimal literal: it traditionally supported specifying the bit pattern of a negative value (see section 2.4.2 in the language spec). So &H80000000 is a literal of type Integer with the value -2147483648, while 2147483648 is a Long. Thus the mismatch.
If you think VB.NET is a quirky language then I'd invite you to read this post :)
The VB version should be:
&H80000000L = 2147483648
Without the 'long' specifier ('L'), VB will try to interpret &H80000000 as an Integer. If you force it to be treated as a Long, then you'll get the same result.
&H80000000UI will also work - actually this is the type (UInt32) that C# regards the literal as.
This happens because the type of the hexadecimal number is UInt32 in C# and Int32 in VB.NET.
The binary representation of the hexadecimal number is:
10000000000000000000000000000000
Both UInt32 and Int32 take 32 bits, but because Int32 is signed, the most significant bit is a sign bit: 0 means positive, 1 means negative. To convert a negative binary number to decimal, do this:
1. Invert the bits. You get 01111111111111111111111111111111.
2. Convert this to decimal. You get 2147483647.
3. Add 1 to this number. You get 2147483648.
4. Make this negative. You get -2147483648, which is equal to &H80000000 in VB.NET.
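A small C# sketch of the point above: the same 32-bit pattern reads as 2147483648 unsigned and -2147483648 signed.
uint bits = 0x80000000; // the type C# infers for the literal
int signedValue = unchecked((int)bits); // reinterpret the same bits as Int32
Console.WriteLine(bits); // 2147483648
Console.WriteLine(signedValue); // -2147483648
Console.WriteLine(0x80000000 == 2147483648); // True: both operands are uint in C#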

Int64 seems too short in C#

I'm trying to write the largest Int64 value to the command line. I tried using 0x1111111111111111, which is 16 ones, and Visual Studio says that is an Int64. I would have assumed it would be an Int16. What am I missing here?
0x is the prefix for hexadecimal, not binary, literals. This means that the binary representation of your number is 0001000100010001000100010001000100010001000100010001000100010001.
There are unfortunately no binary literals in C# (they were only added later, in C# 7.0, with the 0b prefix), so you either have to do the calculation yourself (0x7FFFFFFFFFFFFFFF) or use the Convert class, for example:
short s = Convert.ToInt16("1111111111111111", 2); // base 2 for binary; yields -1
In order to just get the largest Int64 number, you don't need to perform any calculations of your own, as it is already available for you in this field:
Int64.MaxValue
The literal 0x1111111111111111 is a hexadecimal number. Each hexadecimal digit can be represented using four bits, so with 16 hexadecimal digits you need 4 * 16 = 64 bits. You probably intended to write the binary number 1111111111111111. You can convert from a binary digit string to an integer using the following code:
Convert.ToInt16("1111111111111111", 2)
This will return the desired number (-1).
To get the largest Int64 you can use Int64.MaxValue (0x7FFFFFFFFFFFFFFF) or if you really want the unsigned value you can use UInt64.MaxValue (0xFFFFFFFFFFFFFFFF).
The largest Int64 value is Int64.MaxValue. To print this in hex, try:
Console.WriteLine(Int64.MaxValue.ToString("X"));
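Tying the answers together, a quick sketch of what the literal actually is versus the true maximum:
long fromHex = 0x1111111111111111; // 1229782938247303441, nowhere near the maximum
Console.WriteLine(fromHex);
Console.WriteLine(Int64.MaxValue); // 9223372036854775807
Console.WriteLine(Int64.MaxValue.ToString("X")); // 7FFFFFFFFFFFFFFF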

What does C# do when typecasting a letter to an int?

We have to implement encryption for an external interface. The owner of the interface has given us documentation of how to perform the same encryption on our side. However, this documentation is in C# and we work in PHP.
Most of the parts we understand, except where they seem to typecast a hash to an int. Their code reads:
// hashString exists and is an MD5-like string
int[] keyBuffer = new int[hashString.Length];
for (int i = 0; i < hashString.Length; i++) {
    keyBuffer[i] = (int)hashString[i];
}
In PHP, when casting a letter to int, you get 0 (int). As we can't imagine this is what the third party means, we believe C# does something else.
Does C# also cast it to the int 0, or possibly to a char?
Second, the original hashString is 320 characters long. Does this mean the code will create an int array 320 elements long? In PHP you don't have this idea of reserving memory as C# does here. And when we try to typecast a 320-character string to an int, we get an int which is 19 'chars' long.
Does C# also create a shorter int when typecasting a really long 'number' in a string?
You're converting a char to int. A char is a UTF-16 code unit - an unsigned 16-bit integer (the range is [0, 65535]). You get that value, basically, widened to a 32-bit signed integer. So 'A' ends up as 65, for example, and the Euro symbol (U+20AC) ends up as 8364 (0x20ac).
As for your second part: you're not creating an int, you're creating an int array. And yes, you'll be creating an array with 320 elements.
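A minimal sketch of that widening, using a hypothetical sample value for hashString:
string hashString = "d131"; // hypothetical sample input
int[] keyBuffer = new int[hashString.Length];
for (int i = 0; i < hashString.Length; i++) {
    keyBuffer[i] = hashString[i]; // char widens implicitly to int
}
Console.WriteLine(string.Join(", ", keyBuffer)); // 100, 49, 51, 49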
C# strings are UTF-16. When you cast a UTF-16 character to an int, it merely copies the 16-bit UTF-16 character value into the 32-bit int.
C# can cast a character to an int, and it will give you the character code. The code above takes a string, hashString, and turns it into an array of integers, keyBuffer. C# is capable of treating a string like an array of chars using the indexer [] syntax. The code above will produce an array of ints, one per character in the hash string, and each int will be the character code of the corresponding character.
To expand on Jon Skeet's post, your "decimal" integer values will map to the corresponding char values as in a standard ASCII chart.
So, casting the integer value 0 to a char will return the NUL character ('\0').
EDIT: Looking at your original question, it is possible you would be better served looking at an MD5 Example instead of casting the string to an array of integers.
The code actually casts each char (normally ASCII) to an int; it does not turn a letter into 0. So if the original string is "d131dd02c5e6eec4", the resulting array will be int[] { 100, 49, 51, 49, 100, 100, 48, 50, 99, 53, 101, 54, 101, 101, 99, 52 }.
So I imagine you need the ord function in your PHP script.
EDIT:
One remark: casting a string to int in PHP may actually parse it into an int, and the largest int PHP handles is either 32-bit or 64-bit depending on the OS. That's why you get a 19-character-long int: that is the length of the largest 64-bit int.
In C#, there is a separate type called char, which represents one Unicode character and can be cast directly to an integer. You cannot cast a string directly to an int in C#.
EDIT2:
I imagine your PHP script would look like this:
<?php
$keyBuffer = array();
for ($i = 0; $i < strlen($hashString); $i++) {
    $keyBuffer[$i] = ord($hashString[$i]);
}
?>

Hexadecimal notation and signed integers

This is a follow-up question. So, Java stores integers in two's complement and you can do the following:
int ALPHA_MASK = 0xff000000;
In C# this requires the use of an unsigned integer, uint, because it interprets the literal as 4278190080 instead of -16777216.
My question: how do you declare negative values in hexadecimal notation in C#, and how exactly are integers represented internally? What are the differences from Java here?
C# (rather, .NET) also uses two's complement, but it supports both signed and unsigned types (which Java doesn't). A bit mask is more naturally an unsigned thing - why should one bit be different from all the other bits?
In this specific case, it is safe to use an unchecked cast:
int ALPHA_MASK = unchecked((int)0xFF000000);
To "directly" represent this number as a signed value, you write
int ALPHA_MASK = -0x1000000; // == -16777216
Hexadecimal is not (or should not be) any different from decimal: to represent a negative number, you need to write a negative sign followed by the digits representing the absolute value.
Well, you can use an unchecked block and a cast:
unchecked
{
int ALPHA_MASK = (int)0xff000000;
}
or
int ALPHA_MASK = unchecked((int)0xff000000);
Not terribly convenient, though... perhaps just use a literal integer?
And just to add insult to injury, a negative hexadecimal literal will work too:
int ALPHA_MASK = -0x1000000; // -16777216, the same bit pattern as 0xFF000000
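A quick sketch checking that the spellings above agree:
int a = unchecked((int)0xFF000000);
int b = -0x1000000;
Console.WriteLine(a == b); // True: both are -16777216
Console.WriteLine(a.ToString("X")); // FF000000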
