I am trying to learn Java from a C# background. I have found some nuances along the way, such as C# not having a separate Integer reference type like Java does (it only has int as a value type), which led me to wonder whether this translation is correct:
String line = "Numeric string"; // Java
string line = "Numeric string"; // C#
int size;
size = Integer.valueOf(line, 16).intValue(); // In Java
size = int.Parse(line, System.Globalization.NumberStyles.Integer); // In C#
No, that's not quite a valid translation - because the 16 in the Java code means the string should be parsed as hexadecimal (base 16). The Java code is closer to:
int size = Convert.ToInt32(line, 16);
That overload of Convert.ToInt32 allows you to specify the base.
Alternatively you could use:
int size = int.Parse("11", NumberStyles.AllowHexSpecifier);
I'm not keen on the name "allow hex specifier" as it suggests that a prefix of "0x" will be accepted... but in reality it means it's always interpreted as hex.
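To make the comparison concrete, here is a small sketch of my own (using a sample hex string, not the one from the question) showing both C# options side by side:

string hex = "1FE"; // sample hex string, not from the original post

// Overload that takes the base explicitly, like Integer.valueOf(line, 16) in Java:
int viaConvert = Convert.ToInt32(hex, 16);

// NumberStyles.AllowHexSpecifier treats the whole string as hex digits
// (no "0x" prefix allowed, despite the name):
int viaParse = int.Parse(hex, System.Globalization.NumberStyles.AllowHexSpecifier);

Console.WriteLine(viaConvert); // 510
Console.WriteLine(viaParse);   // 510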
In Java, this is better:
Integer.parseInt(line, 16);
It parses the string directly into an int, while Integer.valueOf returns a boxed Integer.
I want to declare a -1 literal using the new binary literal feature:
int x = 0b1111_1111_1111_1111_1111_1111_1111_1111;
Console.WriteLine(x);
However, this doesn't work because C# treats it as a uint literal, and we get Cannot implicitly convert type 'uint' to 'int'... which is a bit strange to me since we're dealing with binary data.
Is there a way to declare a -1 integer value using a binary literal in C#?
After trying a few cases, I finally found this one:
int x = -0b000_0000_0000_0000_0000_0000_0000_0001;
Console.WriteLine(x);
And the result printed is -1.
If I understand correctly, the compiler picks the literal's type from its value: with all 32 bits set to 1 the value no longer fits in a signed int, so the literal becomes a uint.
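If you want to see what the compiler actually infers, here is a quick sketch of my own (not from the original answer):

// All 32 bits set: the value (4294967295) doesn't fit in int, so the
// literal's type is inferred as uint.
var allOnes = 0b1111_1111_1111_1111_1111_1111_1111_1111;
Console.WriteLine(allOnes.GetType()); // System.UInt32

// A short negative binary literal stays an int:
int minusOne = -0b1;
Console.WriteLine(minusOne); // -1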
You can explicitly cast it, but because a constant expression is involved, I believe you have to specify unchecked manually:
int x = unchecked((int)0b1111_1111_1111_1111_1111_1111_1111_1111);
(Edited to include Jeff Mercado's suggestion.)
You can also use something like int x = -0b1 as pointed out in S.Petrosov's answer, but of course that doesn't show the actual bit representation of -1, which might defeat the purpose of declaring it using a binary literal in the first place.
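For what it's worth, a quick way to check the resulting value and bit pattern (my own sketch, not part of the original answer):

int x = unchecked((int)0b1111_1111_1111_1111_1111_1111_1111_1111);
Console.WriteLine(x);                      // -1
Console.WriteLine(Convert.ToString(x, 2)); // 11111111111111111111111111111111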
int foo = 510;
int bar = 0x0001FE;
Are there any differences in how these two declarations compile?
For example, does your computer read these two values differently?
No, but hex is clearer for a human to read when you're looking at bit/byte values.
Internally everything is binary anyway; the compiler converts both literals to the same representation.
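A quick way to convince yourself (my own sketch, not from the answer):

int foo = 510;
int bar = 0x0001FE;

// Both literals compile to exactly the same constant value.
Console.WriteLine(foo == bar);                // True
Console.WriteLine(Convert.ToString(bar, 16)); // 1fe
Console.WriteLine(Convert.ToString(foo, 2));  // 111111110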
I'm getting the wrong number when converting bits to float in C#.
Let's use this bit pattern: 1065324597
In Java, if I want to convert from bits to float, I would use the intBitsToFloat method:
int intbits= 1065324597;
System.out.println(Float.intBitsToFloat(intbits));
Output: 0.9982942, which is the correct output I want to get in C#.
However, in C# I used
int intbits= 1065324597;
Console.WriteLine((float)intbits);
Output: 1.065325E+09 Wrong!!
My question is: how would you do intBitsToFloat in C#?
My attempt:
I looked at the documentation here: http://msdn.microsoft.com/en-us/library/aa987800(v=vs.80).aspx
but I still have the same trouble.
Just casting is an entirely different operation. You need BitConverter.ToSingle(byte[], int) having converted the int to a byte array - and possibly reversed the order, based on the endianness you want. (EDIT: Probably no need for this, as the same endianness is used for both conversions; any unwanted endianness will just fix itself.) There's BitConverter.DoubleToInt64Bits for double, but no direct float equivalent.
Sample code:
int x = 1065324597;
byte[] bytes = BitConverter.GetBytes(x);
float f = BitConverter.ToSingle(bytes, 0);
Console.WriteLine(f);
I want to add, on top of what Jon Skeet said, that for a big float, if you don't want the "E+" (scientific notation) output, you can format the value with:
f.ToString("N0");
Just try this...
var myBytes = BitConverter.GetBytes(1065324597);
var mySingle = BitConverter.ToSingle(myBytes,0);
BitConverter.GetBytes converts your integer into a four-byte array. Then BitConverter.ToSingle converts that array into a float (Single).
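As a side note, newer .NET runtimes (I believe .NET Core 2.0+ / .NET 5+) have a helper that skips the byte array entirely; a minimal sketch assuming such a runtime:

int intbits = 1065324597;

// Reinterprets the raw bits of the int as a float, like Java's Float.intBitsToFloat.
float f = BitConverter.Int32BitsToSingle(intbits);
Console.WriteLine(f); // 0.9982942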
We have to implement encryption for an external interface. The owner of the interface has provided documentation of how to perform the same encryption on our side. However, this documentation is in C# and we work in PHP.
We understand most of it, except for the part where they seem to typecast a hash to an int. Their code reads:
// hashString exists and is an MD5-like string
int[] keyBuffer = new int[hashString.Length];
for (int i = 0; i < hashString.Length; i++) {
    keyBuffer[i] = (int)hashString[i];
}
In PHP, when you cast a letter to int, you get 0. As we can't imagine this is what the third party means, we believe C# does something else.
Does C# also cast it to an int of 0, or possibly to a char code?
Second, the original hashString is 320 characters long. Does this mean the code will be creating an int which is 320 digits long?? In PHP you don't have this idea of reserving memory like C# does here. But when we try to typecast a 320-character string to an int, we get an int which is 19 'chars' long.
Does C# also create a shorter int when typecasting a really long 'number' in a string?
You're converting a char to int. A char is a UTF-16 code unit - an unsigned 16-bit integer (the range is [0, 65535]). You get that value, basically, widened to a 32-bit signed integer. So 'A' ends up as 65, for example, and the Euro symbol (U+20AC) ends up as 8364 (0x20ac).
As for your second part - you're not creating an int, you're creating an int array. And yes, you'll be creating an array with 320 elements.
C# strings are UTF-16. When you cast a UTF-16 character to an int, it merely copies the 16-bit UTF-16 character value into the 32-bit int.
C# can cast a character to an int, which gives you the character code. The code above takes a string, hashString, and turns it into an array of integers, keyBuffer. C# can treat a string like an array of chars using the indexer [] syntax. The code above will produce an array of ints, one per character in the hash string, and each int will be the character code of the corresponding character.
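To make that concrete, a small sketch of my own (using a shortened sample string, not the actual hash from the question):

string hashString = "d131"; // shortened sample, not the real hash
int[] keyBuffer = new int[hashString.Length];
for (int i = 0; i < hashString.Length; i++)
{
    // Each char is a UTF-16 code unit; the cast just widens it to a 32-bit int.
    keyBuffer[i] = (int)hashString[i];
}
Console.WriteLine(string.Join(", ", keyBuffer)); // 100, 49, 51, 49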
To expand on Jon Skeet's post, your "decimal" integer values will map to the corresponding char values as in a standard ASCII chart (which I have had on my development PCs for years).
So, casting the integer value 0 to a char will give you a NUL character.
EDIT: Looking at your original question, it is possible you would be better served looking at an MD5 Example instead of casting the string to an array of integers.
The code actually casts each char (normally ASCII) to its int character code; it does not turn letters into 0. So if the original string is "d131dd02c5e6eec4", the resulting array will be int[]{100, 49, 51, 49, 100, 100, 48, 50, 99, 53, 101, 54, 101, 101, 99, 52}.
So I imagine you need the function ord in your PHP script.
EDIT:
A small remark: casting a string to int in PHP actually parses it into an int, and the largest int PHP handles is either 32-bit or 64-bit depending on the platform. That's why you get an int that is 19 'chars' long - that's the length of the maximum 64-bit int.
In C#, there is another type called char, which represents one Unicode (UTF-16) character and can be cast directly to an integer. You cannot cast a whole string directly to an int in C#.
EDIT2:
I imagine your PHP script to look like this:
<?php
$keyBuffer = array();
for ($i = 0; $i < strlen($hashString); $i++) {
    $keyBuffer[$i] = ord($hashString[$i]);
}
?>
This is a follow-up question. So, Java stores integers in two's complement and you can do the following:
int ALPHA_MASK = 0xff000000;
In C#, this requires the use of an unsigned integer, uint, because the compiler interprets the literal as 4278190080 instead of -16777216.
My question: how do you declare negative values in hexadecimal notation in C#, and how exactly are integers represented internally? What are the differences from Java here?
C# (rather, .NET) also uses two's complement, but it supports both signed and unsigned types (which Java doesn't). A bit mask is more naturally an unsigned thing - why should one bit be different from all the other bits?
In this specific case, it is safe to use an unchecked cast:
int ALPHA_MASK = unchecked((int)0xFF000000);
To "directly" represent this number as a signed value, you write
int ALPHA_MASK = -0x1000000; // == -16777216
Hexadecimal is not (or should not be) any different from decimal: to represent a negative number, you write a negative sign, followed by the digits representing the absolute value.
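To verify that both spellings really produce the same value (a quick check of my own, not from the original answer):

int a = unchecked((int)0xFF000000);
int b = -0x1000000;

Console.WriteLine(a);                       // -16777216
Console.WriteLine(b);                       // -16777216
Console.WriteLine(Convert.ToString(a, 16)); // ff000000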
Well, you can use an unchecked block and a cast:
unchecked
{
    int ALPHA_MASK = (int)0xff000000;
}
or
int ALPHA_MASK = unchecked((int)0xff000000);
Not terribly convenient, though... perhaps just use a literal integer?
And just to add insult to injury, a negated hex literal like -0x7F000000 compiles too - just be aware that its value is -2130706432 (bit pattern 0x81000000), not the -16777216 you get from 0xFF000000.