I'm writing a program that takes a number (like 1253) and turns it into 125^3, and I'm running into an odd problem where converting part of a string doesn't seem to work. Here is my code:
string example = "1253";
// grab all but the last character
int num = Convert.ToInt32(example.Substring(0, example.Length - 1));
Console.WriteLine(num);
// grab the last character
//int pow = Convert.ToInt32(example.Substring(example.Length - 1));
int pow = Convert.ToInt32(example[example.Length - 1]);
Console.WriteLine(pow);
// output num to the power of pow
Console.WriteLine(Math.Pow(num, pow));
Console.ReadKey();
The commented-out initialization of the variable pow works correctly, but the uncommented one does not, for some reason. Both ways of grabbing the last character of the string seem to work, but with the first, "3" converts to 3, while with the latter, '3' converts to 51.
Here is the output when using the commented initialization of pow:
125
3
1953125
Here is the output when using the uncommented initialization of pow:
125
51
8.75811540203011E+106
I'm fairly new to C# so any help would be much appreciated. Thank you!
When you use the indexer on a string, example[example.Length - 1], you get back a char with the value '3' (not a string "3").
This means a different overload of Convert.ToInt32 is called, the one that takes a char as the parameter. The conversion applied to a char is completely different from the one applied to a string.
char : Converts the value of the specified Unicode character to the equivalent 32-bit signed integer.
as opposed to
string: Converts the specified string representation of a number to an equivalent 32-bit signed integer.
If you take a peek at a Unicode table, you'll see that '3' has a value of hex 33, or 51.
You might have better luck with example[example.Length - 1].ToString().
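To make the difference concrete, here is a minimal sketch (variable names taken from the question) contrasting the two overloads and the ToString() fix:

string example = "1253";
// string overload: parses the text "3" as the number 3
int fromString = Convert.ToInt32(example.Substring(example.Length - 1));  // 3
// char overload: reinterprets the character '3' as its code point
int fromChar = Convert.ToInt32(example[example.Length - 1]);              // 51
// turning the char back into a string restores the parsing behaviour
int fixedPow = Convert.ToInt32(example[example.Length - 1].ToString());   // 3
Console.WriteLine($"{fromString} {fromChar} {fixedPow}");                 // 3 51 3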
When you pass a char into Convert.ToInt32(), it converts it to its character code (the same as the ASCII value for digits). That's why your '3' is becoming 51.
I'm taking a number as input and trying to add each digit to an array of int without using any loop.
Here is an answer I found:
int[] fNum = Array.ConvertAll(num.ToString().ToArray(), x => (int)x - 48);
I understand everything up to .ToArray(), but I do not understand why it takes a new variable x, or what => (int)x - 48 does.
Could anyone explain this to me?
Because the ASCII value of '0' is 48 and that of '1' is 49. So to get the numeric value 1 from the char '1' you compute 49 - 48, which equals 1, and similarly for the other digits.
You should also look into the documentation of Array.ConvertAll.
It clearly explains the second parameter:
A Converter<TInput,TOutput> that converts each element from one type to another type.
You can also refer to the declaration of this method in the Array class.
Also, have a look at the lambda operator (=>) and its official documentation to understand the x => ... syntax.
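To make the Converter<TInput,TOutput> parameter concrete, here is a small sketch that does the same thing as the lambda, but with the converter written out as a named method (DigitCharToInt is just an illustrative name, and ToCharArray() stands in for the LINQ ToArray() so no extra using is needed):

using System;

class Program
{
    // Matches Converter<char, int>: takes one char from the source array, returns an int
    static int DigitCharToInt(char c)
    {
        return (int)c - 48; // '0' is 48, so '7' (code 55) becomes 7
    }

    static void Main()
    {
        int num = 1253;
        int[] fNum = Array.ConvertAll(num.ToString().ToCharArray(), DigitCharToInt);
        Console.WriteLine(string.Join(", ", fNum)); // 1, 2, 5, 3
    }
}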
without using any loop
Well, I might have a surprise for you.
a new variable x
ConvertAll is actually a loop under the hood. It iterates through the collection. x represents an item in the collection.
x=>(int)x - 48
For each item x in the collection, cast it to an int and subtract 48.
This syntax is a lambda expression.
num.ToString().ToArray(),x=>(int)x - 48
This code divides a string of digits into an array of characters, interprets each CHAR-type character as its ASCII value, and converts that to an INT value.
The char '5' has an ASCII value of 53, so 48 must be subtracted to convert it to the INT value 5.
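As a side note, a common way to avoid the magic number 48 is to subtract the character '0' instead; here is a quick sketch showing that both forms give the same result:

char c = '5';
int viaMagicNumber = (int)c - 48;  // 53 - 48 = 5
int viaCharLiteral = c - '0';      // same arithmetic, but self-documenting
Console.WriteLine($"{viaMagicNumber} {viaCharLiteral}"); // 5 5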
I have a string that contains numbers, like so:
string keyCode = "1200009990000000000990";
Now I want to get the number at position 2 as an integer, which I did like so:
int number = Convert.ToInt32(keyCode[1]);
But instead of getting 2, I get 50.
What am I doing wrong?
50 is the ASCII code for the char '2': '0' -> 48, '1' -> 49, etc.
You can do
int number = keyCode[1] - '0';
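Applied to the string from the question, a quick sketch of that approach:

string keyCode = "1200009990000000000990";
int number = keyCode[1] - '0'; // '2' (50) minus '0' (48) gives 2
Console.WriteLine(number);     // 2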
You observed that when you do int n = Convert.ToInt32( '2' ); you get 50. That's correct.
Apparently, you did not expect to get 50, you expected to get 2. That's what's not correct to expect.
It is not correct to expect that Convert.ToInt32( '2' ) will give you 2, because then what would you expect if you did Convert.ToInt32( 'A' ) ?
A character is not a number. But of course, inside the computer, everything is represented as a number. So, there is a mapping that tells us what number to use to represent each character. Unicode is such a mapping, ASCII is another mapping that you may have heard of. These mappings stipulate that the character '2' corresponds to the number 50, just as they stipulate that the character 'A' corresponds to the number 65.
Convert.ToInt32( char c ) performs a very rudimentary conversion, it essentially reinterprets the character as a number, so it allows you to see what number the character corresponds to. But if from '2' you want to get 2, that's not the conversion you want.
Instead, you want a more complex conversion, which is the following: int n = Int32.Parse( keyCode.Substring( 1, 1 ) );
Well, you got 50 because it is the ASCII code of '2'.
You are getting it because you are indexing a char, and when C# converts a char to an int it gives back its character code. You should instead use int.Parse, which takes a string:
int.Parse(keyCode[1].ToString());
or
int.Parse(keyCode.Substring(1,1));
You need
int number = (int)Char.GetNumericValue(keyCode[1]);
The cast is needed because Char.GetNumericValue returns a double.
As Ofir said, another method is int number = int.Parse(keyCode[1].ToString()).
This can be explained like this: int is shorthand for Int32, which contains a Parse(string) method. Parse only works if the input is a string that contains only digits, so it's not always the best choice unless TryParse (the method that checks whether a string can be parsed) has been invoked and returns true; in this case, since your input string is always a number, you can use Parse without calling TryParse first. keyCode[1] uses the string indexer, which returns a char rather than a string, which is why you need to invoke ToString() on it before you can parse it.
This is my personal favorite way to convert a string or char to an int, since it's pretty easy to make sense of once you understand the conversions that are performed both explicitly and implicitly. If the string to convert isn't guaranteed to be numeric, it may be better to use a different method, since wrapping it in a TryParse check or a try/catch takes longer to write and execute than some of the other solutions.
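For comparison, here is a short sketch showing both approaches from this answer side by side (variable names are illustrative):

string keyCode = "1200009990000000000990";
// Char.GetNumericValue returns a double, hence the cast
int viaNumericValue = (int)Char.GetNumericValue(keyCode[1]);
// int.Parse needs a string, hence the ToString() on the char
int viaParse = int.Parse(keyCode[1].ToString());
Console.WriteLine($"{viaNumericValue} {viaParse}"); // 2 2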
I'm using a random generator that takes the number of random bytes as input and returns a byte array. What I need now is to convert that byte array to an 8-digit integer and from that to a string.
byte[] randomData = this.GetRandomArray(4);
SecretCode = Math.Abs(BitConverter.ToInt32(randomData, 0)).ToString().Substring(0, 7);
But on some occasions the int is shorter than 8 digits and my method fails. How can I make sure that the generated byte array can be converted to an 8-digit number?
One more option:
myString = BitConverter.ToUInt32(randomData, 0).ToString("D8");
Note - using ToUInt32 is a more sensible approach than converting to a signed integer and taking the absolute value (it also doubles the number of values you can generate, since -123 and 123 will result in different string output, which they won't if you use Math.Abs); and the format "D8" pads the result with leading zeros to at least eight digits.
See https://stackoverflow.com/a/5418425/1967396
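Here is a hedged end-to-end sketch of that idea. The original GetRandomArray helper isn't shown, so RandomNumberGenerator stands in for it, and the value is reduced modulo 100000000 first because "D8" only guarantees a minimum of eight digits:

using System;
using System.Security.Cryptography;

class Program
{
    static void Main()
    {
        // Stand-in for the question's GetRandomArray(4): four cryptographically random bytes
        byte[] randomData = new byte[4];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(randomData);
        }

        // Unsigned conversion avoids Math.Abs; modulo keeps at most eight digits,
        // and "D8" pads shorter values with leading zeros up to eight digits
        uint value = BitConverter.ToUInt32(randomData, 0) % 100000000;
        string secretCode = value.ToString("D8");

        Console.WriteLine(secretCode);
    }
}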
You could just use <stringCode>.PadLeft(8, '0')
Are you sure that your method is failing on the Substring? As far as I can see there's a number of issues:
It'll fail if you don't get 4 bytes back (ArgumentException on BitConverter.ToInt32)
It'll fail if the string isn't long enough (your problem from above)
It'll truncate at seven chars, not eight, as you want.
You can use the PadLeft function to pad with zeros. If you want eight digits, the code should look like:
var s = Math.Abs(BitConverter.ToInt32(randomData, 0))
    .ToString()
    .PadLeft(8, '0')
    .Substring(0, 8);
For seven, replace the 8 with a 7.
You need to concatenate eight zeros before trying to take the Substring(), then take the last 8 characters.
string s = "00000000" + Math.Abs(BitConverter.ToInt32(randomData, 0)).ToString();
SecretCode = s.Substring(s.Length - 8);
Your other option is to use a formatter to ensure the stringification of the bits returns leading zeros.
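A minimal sketch of that formatter idea, reusing the randomData array from the question ("D8" pads with leading zeros to at least eight digits, and Substring caps the result at eight):

// Note: Math.Abs throws for int.MinValue, a rare but possible input
string secretCode = Math.Abs(BitConverter.ToInt32(randomData, 0))
    .ToString("D8")
    .Substring(0, 8);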
I'm trying to write the largest int64 value to the command line. I tried using 0x1111111111111111, which is 16 ones, and Visual Studio says that is int64. I would have assumed that would be int16. What am I missing here?
0x is the prefix for hexadecimal and not binary literals. This means that the binary representation of your number is 0001000100010001000100010001000100010001000100010001000100010001
There are unfortunately no binary literals in older versions of C# (the 0b prefix was only added in C# 7.0), so you either have to do the calculation yourself (0x7FFFFFFFFFFFFFFF) or use the Convert class, for example:
short s = Convert.ToInt16("1111111111111111", 2); // "2" for binary
In order to just get the largest Int64 number, you don't need to perform any calculations of your own, as it is already available for you in this field:
Int64.MaxValue
The literal 0x1111111111111111 is a hexadecimal number. Each hexadecimal digit can be represented using four bits so with 16 hexadecimal digits you need 4*16 = 64 bits. You probably intended to write the binary number 1111111111111111. You can convert from a binary literal string to an integer using the following code:
Convert.ToInt16("1111111111111111", 2)
This will return the desired number (-1).
To get the largest Int64 you can use Int64.MaxValue (0x7FFFFFFFFFFFFFFF) or if you really want the unsigned value you can use UInt64.MaxValue (0xFFFFFFFFFFFFFFFF).
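A small sketch verifying those claims:

// Sixteen binary ones interpreted as a signed 16-bit value give -1 (two's complement)
short allOnes16 = Convert.ToInt16("1111111111111111", 2);
Console.WriteLine(allOnes16);       // -1

// The largest signed and unsigned 64-bit values, no calculation needed
Console.WriteLine(Int64.MaxValue);  // 9223372036854775807
Console.WriteLine(UInt64.MaxValue); // 18446744073709551615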
The largest Int64 value is Int64.MaxValue. To print this in hex, try:
Console.WriteLine(Int64.MaxValue.ToString("X"));
We have to implement encryption for an external interface. The owner of the interface has provided documentation of how to perform the same encryption on our side. However, this documentation is in C# and we work in PHP.
Most of the parts we understand except for where they seem to typecast a hash to an int. Their code reads:
// hashString exists and is an MD5-like string
int[] keyBuffer = new int[hashString.Length];
for (int i = 0; i < hashString.Length; i++) {
    keyBuffer[i] = (int)hashString[i];
}
In PHP, when you cast a letter to int, you get 0 (int). Since we can't imagine this is what the third party means, we believe C# does something else.
Does C# also cast it to int 0, or possibly to a char code?
Second, the original hashString is 320 characters long. Does this mean the code will be creating an int which is 320 long? In PHP you don't have this idea of reserving memory as C# does here. But when we try to typecast a 320-character-long 'number' string to an int we get an int which is 19 'chars' long.
Does C# also create a shorter int when typecasting a really long 'number' in a string?
You're converting a char to int. A char is a UTF-16 code unit - an unsigned 16-bit integer (the range is [0, 65535]). You get that value, basically, widened to a 32-bit signed integer. So 'A' ends up as 65, for example, and the Euro symbol (U+20AC) ends up as 8364 (0x20ac).
As for your second part - you're not creating an int, you're creating an int array. And yes, you'll be creating an array with 320 elements.
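A quick sketch of what that widening looks like in practice (the short sample string is just an illustrative stand-in for the real hash):

Console.WriteLine((int)'A');    // 65
Console.WriteLine((int)'€');    // 8364 (0x20AC)

string hashString = "d131dd02"; // illustrative stand-in
int[] keyBuffer = new int[hashString.Length];
for (int i = 0; i < hashString.Length; i++)
{
    keyBuffer[i] = (int)hashString[i]; // each UTF-16 code unit widened to a 32-bit int
}
Console.WriteLine(string.Join(", ", keyBuffer)); // 100, 49, 51, 49, 100, 100, 48, 50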
C# strings are UTF16. When you cast a UTF16 character to an int, it merely copies the 16-bit UTF16 character value into the 32-bit int.
C# can cast a character to an int, and it will give you the character code. The code above is taking a string, hashString, and turning it into an array of integers, keyBuffer. C# is capable of treating a string like an array of chars using the indexer [] syntax. The code above will produce an array of ints, one per character in the hash string, and each int will be the character code of the corresponding character.
To expand on Jon Skeet's post, your "decimal" integer values map to the corresponding char values as laid out in the standard ASCII/Unicode table.
So, casting the integer value 0 to a char will return the NUL character.
EDIT: Looking at your original question, it is possible you would be better served looking at an MD5 Example instead of casting the string to an array of integers.
The code actually casts each char to its character code (ASCII for these characters); it does not turn '0' into 0. So if the original string is "d131dd02c5e6eec4", the resulting array will be int[] { 100, 49, 51, 49, 100, 100, 48, 50, 99, 53, 101, 54, 101, 101, 99, 52 }.
So I imagine you need the function ord in your PHP script.
EDIT:
A few remarks: casting a string to int in PHP actually parses it into an int, and the largest int PHP handles is either 32-bit or 64-bit depending on the platform. That's why you get an int that is 19 characters long, which is the maximum length of a 64-bit int.
In C#, there is a separate type called char, which represents a single Unicode character and can be cast directly to an integer. You cannot cast a string directly to an int in C#.
EDIT2:
I imagine your PHP script to look like this:
<?php
$keyBuffer = array();
for ($i = 0; $i < strlen($hashString); $i++) {
    $keyBuffer[$i] = ord($hashString[$i]);
}
?>