I am trying to write code that sums the digits of a number. It should work, but I can't find where I am going wrong. I have working code in Python for this and tried to do the same thing in C#, but it gives a wrong result. Here are the two versions:
Python:
number = "12346546"
summ = 0
for i in number:
    summ += int(i)
print summ
C#:
string num = "2342";
int sum = 0;
for (int i = 0; i < num.Length; i++)
{
    int number = Convert.ToInt32(num[i]);
    sum += number;
}
Console.WriteLine(sum);
Edit: I used the debugger and found that when I convert the individual characters they turn into completely different numbers, but if I convert the whole string it converts correctly. How do I fix this?
num[i] is a char, and Convert.ToInt32(char) will return the character code (the ASCII value) of the char instead of the actual numerical value. Use:
int number = Convert.ToInt32(num[i].ToString());
Also, if your loop condition is i < num.Length - 1, change it to i < num.Length so the last digit is included.
Edit: To make it clearer, here is an example:
int n1 = Convert.ToInt32('0'); // Uses Convert.ToInt32(char) result -> 48
int n2 = (int) '0'; // cast from char to int result -> 48
int n3 = Convert.ToInt32("0"); // Uses Convert.ToInt32(string) result -> 0
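Putting that together, a corrected version of the loop from the question might look like the sketch below (same variable names as the question, shown purely for illustration):
string num = "2342";
int sum = 0;
for (int i = 0; i < num.Length; i++)
{
    // Convert.ToInt32(string) parses the digit, instead of returning its character code
    sum += Convert.ToInt32(num[i].ToString());
}
Console.WriteLine(sum); // prints 11 for "2342"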
Replace
Convert.ToInt32(num[i])
with
Convert.ToInt32(num[i].ToString())
Otherwise you will get an ASCII value, because num[i] is a char.
See the MSDN documentation for Convert.ToInt32.
There is no null at the end of a C# string, so you don't have to worry about it, and thus do not require the "-1" in your loop. However, there is a much easier way:
string num = "2342";
int sum = num.ToCharArray().Select(i => int.Parse(i.ToString())).Sum();
This converts the string to a character array, converts them all to ints (returning an IEnumerable<int> in the process) and then returns the sum of them all.
Convert.ToInt32(num[i]); would give you the ASCII value of the character digit. For example, for the character '1' you will get 49.
Use char.GetNumericValue, like:
for (int i = 0; i < num.Length; i++)
{
    int number = (int)char.GetNumericValue(num[i]);
    sum += number;
}
You also need to make sure your loop runs all the way to num.Length (i.e. i < num.Length), so that the last digit is processed.
If you want to use LINQ then you can do:
int sum = num.Select(r => (int)char.GetNumericValue(r)).Sum();
A one-line solution using LINQ:
string num = "2342";
int sum = num.Sum(c => Convert.ToInt32(c.ToString()));
Here's a fiddle: https://dotnetfiddle.net/3jt7G6
You can combine a few things.
You can select a collection of chars from a string using LINQ (Sum() is a LINQ method).
You still need to convert those characters into numbers; you can do this by converting a character to a string and parsing that or you can use another built-in method.
var sum = num.Sum(i => Char.GetNumericValue(i)); // note: Char.GetNumericValue returns double, so sum is a double here
char is, at its core, just a number. It just happens to be a number representing a character.
Here are some solutions that highlight this:
int sum = num.Sum(c => c) - '0' * num.Length;
// or
int sum = 0;
for (int i = 0; i < num.Length; i++)
{
    sum += num[i] - '0';
}
Related
I've got an
int[,] map = new int[100, 100];
and a
String mapString;
mapString simply contains a long run of digits (there are no characters other than numbers).
I now want to assign the first element of the array (map[0,0]) the first char of mapString, the second element (map[0,1]) the second char of mapString, and so on. I use the following code:
int currentposition = 0;
for (int x = 0; x < 100; x++)
{
    for (int y = 0; y < 100; y++)
    {
        map[x, y] = ArrayTest.Properties
            .Settings
            .Default
            .mapSaveSetting
            .ElementAt(currentposition);
        currentposition++;
    }
}
Now what happens is almost what I wished for.
The problem is that it assigns two digits to each element instead of one. Also, I can't figure out what numbers it is using, as they're not the ones in my mapSaveSetting, but I can deal with that myself.
The only problem I really don't get is why each element contains two digits after executing this for-loop. Why does that happen? ElementAt(int) only returns one char, right?
It really looks like a logical mistake to me, but I can't find it. Please don't be offended if I just made a dumb mistake in my way of thinking.
EDIT
As it seems to be unclear what the problem is, I'll add an example.
map[0, 0] == 42
...could be an output. Even if the string started with e.g. 4245634, it would not make sense, as ElementAt(int) should only return the 4, not 42, right?
You are assigning a char to an int. Since there is an implicit conversion from char to int, you are getting the Unicode code of the character (in your case, characters representing digits). To fix your issue you should convert the character to an int.
In your case, since all the characters are numeric, you can use a trick like this:
map[x, y] =
ArrayTest.Properties.Settings.Default.mapSaveSetting.ElementAt(currentposition) - 48;
This works because the Unicode code points of the digits '0' through '9' are sequential and equal to 48 through 57.
I think your mistake is related to the ASCII value of the characters. You should know that each character has an associated ASCII value; in particular, '0' has an ASCII value of 48, '1' of 49, and so on (you can check an ASCII table to verify this).
So, to get the right value of the character, you should subtract the value of the char 0 from the one in the string, like in the following piece of code.
map[x, y] = ArrayTest.Properties.Settings.Default.mapSaveSetting.ElementAt(currentposition) - '0';
You are assigning a char value to an int. A char represents a Unicode code point, and it converts implicitly to int, but not in the way you expect: it gives you the code point.
Example:
Console.WriteLine((int)'A'); // Will print 65
You're actually trying to convert a single digit represented as a string to an int. Use int.Parse for this.
Console.WriteLine(int.Parse("5")); // Will print 5
Another issue: you shouldn't use ElementAt on a string, since it will needlessly iterate over the whole string until the specified index, as string doesn't implement IList<char>.
You could use the indexer like this:
int currentposition = 0;
var setting = ArrayTest.Properties.Settings.Default.mapSaveSetting;
for (int x = 0; x < 100; x++)
{
    for (int y = 0; y < 100; y++)
    {
        map[x, y] = int.Parse(setting[currentposition].ToString());
        currentposition++;
    }
}
But it's actually a waste to convert each char in there to a new string, so just use some basic math instead:
int currentposition = 0;
var setting = ArrayTest.Properties.Settings.Default.mapSaveSetting;
for (int x = 0; x < 100; x++)
{
    for (int y = 0; y < 100; y++)
    {
        map[x, y] = setting[currentposition++] - '0';
    }
}
This works as the code points for the digits are consecutive.
To elaborate just a little on previous answers: in C and C++, a char is a byte (for all intents and purposes), while in C# it's a "Unicode" character. C#, however, has the byte type, which matches pretty much 1:1 with the lower-level languages' char type.
The question suggests this could be a part of the issue.
Additionally (performance consideration): It should be noted that in C#, an array is a "heavy" type, and a multi-dimensional array is really an array-of-arrays. Depending on usage patterns, it could be more efficient to use a single-dimension array and scale one of the indices by row/col-size manually. Something like:
type this[int x, int y] { get { /* scale one of x/y and read from 1-dimensional array */ } }
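For illustration, here is a minimal sketch of that idea (the class and field names are placeholders, not from the original post): a flat one-dimensional array wrapped in an indexer that scales one coordinate by the row size.
public class FlatMap
{
    private readonly int[] _cells;  // one contiguous block instead of int[100, 100]
    private readonly int _height;

    public FlatMap(int width, int height)
    {
        _height = height;
        _cells = new int[width * height];
    }

    // Scale x by the column count to address the 1-dimensional backing array
    public int this[int x, int y]
    {
        get { return _cells[x * _height + y]; }
        set { _cells[x * _height + y] = value; }
    }
}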
I'm trying to calculate the following: if we generated every possible combination of the numbers 0 to (c-1), with a length of x, what set would occur at position i?
For example:
c = 4
x = 4
i = 3
Would yield:
[0000]
[0001]
[0002]
[0003] <- i
[0010]
....
[3333]
This is very nearly the same problem as in the related question Logic to select a specific set from Cartesian set. However, because x and i are large enough to require BigInteger, the code has to be changed to return a List<int> and take a BigInteger instead of working with a string array:
int PossibleNumbers;

public List<int> Get(BigInteger Address)
{
    List<int> values = new List<int>();
    BigInteger sizes = new BigInteger(1);
    for (int j = 0; j < PixelArrayLength; j++)
    {
        BigInteger index = BigInteger.Divide(Address, sizes);
        index = (index % PossibleNumbers);
        values.Add((int)index);
        sizes *= PossibleNumbers;
    }
    return values;
}
This seems to behave as I'd expect, however, when I start using values like this:
c = 66000
x = 950000
i = (66000^950000)/2
So here, I'm looking for the ith value in the cartesian set of 0 to (c-1) of length 950000, or put another way, the halfway point.
At this point, I just get a list of zeroes returned. How can I solve this problem?
Notes: it's quite a specific problem, and I apologise for the wall of text; I hope it's not too much, I was just trying to explain properly what I meant. Thanks to you all!
Edit: Here are some more examples: http://pastebin.com/zmSDQEGC
Here is a generic base converter... it takes a decimal for the base-10 value to convert into your newBase and returns an array of ints. If you need a BigInteger, this method works perfectly well with just the base10Value changed to BigInteger.
EDIT: Converted method to BigInteger since that's what you need.
EDIT 2: Thanks phoog for pointing out that BigInteger is base 2, so I'm changing the method signature.
public static int[] ConvertToBase(BigInteger value, int newBase, int length)
{
    var result = new Stack<int>();
    while (value > 0)
    {
        result.Push((int)(value % newBase));
        if (value < newBase)
            value = 0;
        else
            value = value / newBase;
    }
    for (var i = result.Count; i < length; i++)
        result.Push(0);
    return result.ToArray();
}
usage...
int[] a = ConvertToBase(13, 4, 4);    // [0, 0, 3, 1]
int[] b = ConvertToBase(0, 4, 4);     // [0, 0, 0, 0]
int[] c = ConvertToBase(1234, 12, 4); // [0, 8, 6, 10]
However, the problem you specifically state is a bit large to test it on. :)
Just calculating 66000 ^ 950000 / 2 is a good bit of work, as phoog mentioned. Unless of course you meant ^ to be the XOR operator, in which case it's quite fast.
EDIT: From the comments... The largest base10 number that can be represented given a particular newBase and length is...
var largestBase10 = BigInteger.Pow(newBase, length)-1;
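As a small sanity check of that formula (the numbers here are chosen purely for illustration, reusing the ConvertToBase method above): with newBase = 4 and length = 4, the largest representable value is 3333 in base 4, which is 255 in base 10.
var largestBase10 = BigInteger.Pow(4, 4) - 1;       // 255
int[] digits = ConvertToBase(largestBase10, 4, 4);  // [3, 3, 3, 3]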
The first expression of the problem boils down to "write 3 as a 4-digit base-4 number". So, if the problem is "write i as an x-digit base-c number", or, in this case, "write (66000^950000)/2 as a 950000-digit base 66000 number", then does that make it easier?
If you're specifically looking for the halfway point of the cartesian product, it's not so hard. If you assume that c is even, then the most significant digit is c / 2, and the rest of the digits are zero. If your return value is all zeros, then you may have an off-by-one error, or the like, since actually only one digit is incorrect.
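To see this on a small case (a sketch only; the values are made up for illustration, and it reuses the ConvertToBase method from the answer above together with System.Numerics.BigInteger): for c = 4 and x = 3 the full set has 4^3 = 64 entries, and the halfway index written in base 4 is [2, 0, 0], i.e. the most significant digit is c / 2 and the rest are zero.
BigInteger half = BigInteger.Pow(4, 3) / 2;  // 32
int[] digits = ConvertToBase(half, 4, 3);    // [2, 0, 0]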
I want to pick the integer combinations from a single integer.
For example, if I have the number 1234, what I want is: 1, 2, 3, 4, 12, 23, 34, 123, 234, 1234.
Kindly help.
If I understood you right, you want all substrings of a given string (in this case the number 1234). So, for a string of length n there are n substrings of length 1, n − 1 substrings of length 2, and so on, down to one substring of length n.
Given that, you can easily solve this with two nested loops, e.g.:
public static IEnumerable<int> Foo(int x) {
    string s = x.ToString();
    for (int length = 1; length <= s.Length; length++) {
        for (int i = 0; i + length <= s.Length; i++) {
            yield return int.Parse(s.Substring(i, length));
        }
    }
}
(Untested and there are likely fencepost errors, but you get the idea.)
How about this article on Permutations, Combinations, and Variations using C# Generics, where permutations and combinations are discussed, with code?
If I want to generate an array that goes from 1 to 6 and increments by .01, what is the most efficient way to do this?
What I want is an array, with the min and max subject to change later... like this: x = [1, 1.01, 1.02, 1.03, ...]
Assuming a start, end and an increment value, you can abstract this further:
Enumerable
    .Repeat(start, (int)((end - start) / increment) + 1)
    .Select((tr, ti) => tr + (increment * ti))
    .ToList()
Let's break it down:
Enumerable.Repeat takes a starting number, repeats for a given number of elements, and returns an enumerable (a collection). In this case, we start with the start element, find the difference between start and end and divide it by the increment (this gives us the number of increments between start and end) and add one to include the original number. This should give us the number of elements to use. Just be warned that since the increment is a decimal/double, there might be rounding errors when you cast to an int.
Select transforms all elements of an enumerable given a specific selector function. In this case, we're taking the number that was generated and the index, and adding the original number with the index multiplied by the increment.
Finally, the call to ToList will save the collection into memory.
If you find yourself using this often, then you can create a method to do this for you:
public static List<decimal> RangeIncrement(decimal start, decimal end, decimal increment)
{
    return Enumerable
        .Repeat(start, (int)((end - start) / increment) + 1)
        .Select((tr, ti) => tr + (increment * ti))
        .ToList();
}
Edit: Changed to using Repeat, so that non-whole number values will still be maintained. Also, there's no error checking being done here, so you should make sure to check that increment is not 0 and that start < end * sign(increment). The reason for multiplying end by the sign of increment is that if you're incrementing by a negative number, end should be before start.
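For example, for the range asked about in the question (this usage line is just an illustration, not part of the original answer):
// 501 values: 1.00, 1.01, ..., 6.00
List<decimal> values = RangeIncrement(1.0m, 6.0m, 0.01m);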
The easiest way is to use Enumerable.Range:
double[] result = Enumerable.Range(100, 501) // 100..600 inclusive
    .Select(i => (double)i / 100)            // 1.00..6.00
    .ToArray();
(hence efficient in terms of readability and lines of code)
I would just make a simple function.
public IEnumerable<decimal> GetValues(decimal start, decimal end, decimal increment)
{
    for (decimal i = start; i <= end; i += increment)
        yield return i;
}
Then you can turn that into an array, query it, or do whatever you want with it.
decimal[] result1 = GetValues(1.0m, 6.0m, .01m).ToArray();
List<decimal> result2 = GetValues(1.0m, 6.0m, .01m).ToList();
List<decimal> result3 = GetValues(1.0m, 6.0m, .01m).Where(d => d > 3 && d < 4).ToList();
Use a for loop with 0.01m increments:
List<decimal> myList = new List<decimal>();
for (decimal i = 1; i <= 6; i += 0.01m)
{
    myList.Add(i);
}
Elegant
double[] v = Enumerable.Range(100, 501).Select(x => x * 0.01).ToArray();
Efficient
Use a for loop
Whatever you do, don't use a floating-point datatype (like double); they don't work well for things like this because of rounding behaviour. Go for either decimal, or integers with a scaling factor. For the latter:
Decimal[] decs = new Decimal[501];
for (int i = 0; i < 501; i++)
{
    decs[i] = (new Decimal(i) / 100) + 1;
}
You could solve it like this. The solution method returns a double array
double[] Solution(double min, int length, double increment)
{
    double[] arr = new double[length];
    double value = min;
    arr[0] = value;
    for (int i = 1; i < length; i++)
    {
        value += increment;
        arr[i] = value;
    }
    return arr;
}
var ia = new float[502]; // guesstimate, with a little headroom for float rounding
var x = 0;
for (float i = 1; i < 6.01; i += 0.01f)
{
    ia[x] = i;
    x++;
}
You could multi-thread this for speed, but it's probably not worth the overhead unless you plan on running this on a really really slow processor.
I'm currently making a game, but I seem to have problems reading values from a text file. For some reason, when I read a value back, I get the ASCII code of the value rather than the actual value I wrote to the file. I've tried about every ASCII conversion function and string conversion function, but I just can't figure it out.
I use a 2D array of integers and a nested for loop to write each element into the file. I've looked at the file and the values are correct, but I don't understand why reading returns the ASCII code. Here's the code I'm using to write to and read from the file:
Writing to file:
for (int i = 0; i < level.MaxRows(); i++)
{
    for (int j = 0; j < level.MaxCols(); j++)
    {
        fileWrite.Write(level.GetValueAtIndex(i, j) + " ");
        //Console.WriteLine(level.GetValueAtIndex(i, j));
    }
    //add new line
    fileWrite.WriteLine();
}
And here's the code where I read the values from the file:
string str = "";
int iter = 0; //used to iterate in each column of array
for (int i = 0; i < level.MaxRows(); i++)
{
    iter = 0;
    //TODO: For some reason, the file is returning ASCII code, convert to int
    //keep reading characters until a space is reached.
    str = fileRead.ReadLine();
    //take the above string and extract the values from it.
    //Place each value in the level.
    foreach (char id in str)
    {
        if (id != ' ')
        {
            //convert id to an int
            num = (int)id;
            level.ChangeTile(i, iter, num);
            iter++;
        }
    }
}
This is the latest version of the loop that I use to read the values. Reading other values is fine; it's just when I get to the array, things go wrong. I guess my question is, why did the conversion to ASCII happen? If I can figure that out, then I might be able to solve the issue. I'm using XNA 4 to make my game.
This is where the value gets converted to text:
fileWrite.Write(level.GetValueAtIndex(i, j) + " ");
The + operator implicitly converts the integer returned by GetValueAtIndex into a string, because you are adding it to a string.
Furthermore, the ReadLine method returns a String, so a numeric value isn't going to magically come back from it. If you want to write binary data, look into BinaryWriter.
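For example, here is a minimal sketch of the binary route (the file name "level.dat" is a placeholder and System.IO is assumed to be imported; it reuses the level methods from the question, so this is an illustration rather than the original code):
// Write the grid as raw Int32 values...
using (var writer = new BinaryWriter(File.Open("level.dat", FileMode.Create)))
{
    for (int i = 0; i < level.MaxRows(); i++)
        for (int j = 0; j < level.MaxCols(); j++)
            writer.Write(level.GetValueAtIndex(i, j));
}

// ...and read them back, with no string parsing involved.
using (var reader = new BinaryReader(File.Open("level.dat", FileMode.Open)))
{
    for (int i = 0; i < level.MaxRows(); i++)
        for (int j = 0; j < level.MaxCols(); j++)
            level.ChangeTile(i, j, reader.ReadInt32());
}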
This is where you are converting the characters to character codes:
num = (int)id;
The id variable is a char, and casting that to int gives you the character code, not the numeric value.
Also, this converts a single character, not a whole number. If you for example have "12 34 56 " in your text file, it will get the codes for 1, 2, 3, 4, 5 and 6, not 12, 34 and 56.
You would want to split the line on spaces, and parse each substring:
foreach (string id in str.Split(' ')) {
    if (id.Length > 0) {
        num = Int32.Parse(id);
        level.ChangeTile(i, iter, num);
        iter++;
    }
}
Update: I've kept the old code (below) with the assumption that one record was on each line, but I've also added a different way of doing it that should work with multiple integers on a line, separated by a space.
Multiple records on one line
str = fileRead.ReadLine();
string[] values = str.Split(new Char[] { ' ' });
foreach (string value in values)
{
    int testNum;
    if (Int32.TryParse(value, out testNum))
    {
        // again, not sure how you're using iter here
        level.ChangeTile(i, iter, testNum);
    }
}
One record per line
str = fileRead.ReadLine();
int testNum;
if (Int32.TryParse(str, out testNum))
{
    // however, I'm not sure how you're using iter here; if it's related to
    // parsing the string, you'll probably need to do something else
    level.ChangeTile(i, iter, testNum);
}
Please note that the above should work if you write out each integer line by line (i.e. how you were doing it via the WriteLine that is commented out in your code above). If you switch back to using WriteLine, this should work.
You have:
foreach (char id in str)
{
//convert id to an int
num = (int)id;
A char is an ASCII code (or can be considered as such; technically it is a unicode code-point, but that is broadly comparable assuming you are writing ANSI or low-value UTF-8).
What you want is:
num = (int)(id - '0');
This:
fileWrite.Write(level.GetValueAtIndex(i, j) + " ");
converts the int returned from level.GetValueAtIndex(i, j) into a string. Assuming the function returns the value 5 for a particular i and j, you write "5 " into the file.
When you then read it back, it is read as a string consisting of chars, and you get the ASCII code of '5' when you simply cast it to an int. What you need is:
foreach (char id in str)
{
    if (id != ' ')
    {
        //convert id to an int
        num = (int)(id - '0'); // subtract the ASCII value for 0 from your current id
        level.ChangeTile(i, iter, num);
        iter++;
    }
}
However, this only works if you are only ever going to have single-digit integers (0 - 9). This might be better:
foreach (var cell in fileRead.ReadLine().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
{
    num = int.Parse(cell);
    level.ChangeTile(i, iter, num);
    iter++;
}