Basically I'm trying to multiply each element of the first array by each element of the second array, then store the sum of all those products as a total. I'm pretty new to coding, so I'm just trying to learn, and this one really has me stuck. Here is an example of what it should eventually do.
ExampleArray1 = 5,6,7,8
ExampleArray2 = 2,3,4
(5*2)+(5*3)+(5*4) + (6*2)+(6*3)+(6*4) + (7*2)+(7*3)+(7*4) + (8*2)+(8*3)+(8*4) = 234
My code
int[] firstArray = { 5, 6, 7, 8 };
int[] secondArray = { 2, 3, 4 };
int[] thirdArray = new int[firstArray.Length * secondArray.Length];
for (int i = 0; i < firstArray.Length; i++)
    for (int j = 0; j < secondArray.Length; j++)
    {
        thirdArray[i * firstArray.Length + j] = firstArray[i] * secondArray[j];
        Console.WriteLine(thirdArray[i * firstArray.Length + j]);
    }
You don't need a third array; you can just sum the results:
var total = 0;
for (int i = 0; i < firstArray.Length; i++)
    for (int j = 0; j < secondArray.Length; j++)
    {
        total += firstArray[i] * secondArray[j];
    }
Console.WriteLine(total);
However, the index into the third array is wrong: each i produces secondArray.Length products, so the stride must be the inner length, not the outer one.
i.e. to get the index you need i * secondArray.Length + j (here that's i * 3 + j; for i = 3, j = 2 this gives 11, the last slot of the 12-element array). With these arrays that happens to equal i * (firstArray.Length - 1) + j, but only because secondArray is one element shorter.
int[] thirdArray = new int[firstArray.Length * secondArray.Length];
for (int i = 0; i < firstArray.Length; i++)
    for (int j = 0; j < secondArray.Length; j++)
    {
        thirdArray[i * secondArray.Length + j] = firstArray[i] * secondArray[j];
    }
Console.WriteLine(thirdArray.Sum());
You can apply some basic algebra to simplify this:
var total = 0;
var array1Total = 0;
var array2Total = 0;
for (int i = 0; i < firstArray.Length; i++)
{
    array1Total += firstArray[i];
}
for (int j = 0; j < secondArray.Length; j++)
{
    array2Total += secondArray[j];
}
total = array1Total * array2Total;
Console.WriteLine(total);
The reason is that if you expand (x0+x1+x2+x3+...)*(y0+y1+y2+...), you multiply x0 by each of the y terms, then multiply x1 by each of the y terms, and so on. So every element in the first brackets gets multiplied by every element in the second brackets.
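With the example arrays that's (5 + 6 + 7 + 8) * (2 + 3 + 4) = 26 * 9 = 234, matching the expanded sum above.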
While this may not seem much different, it will be considerably better for large arrays. If the lengths of your arrays are m and n, the nested-loop method takes m*n iterations, while the technique above takes m+n. For small arrays this isn't a big deal, but for arrays of thousands of items the above will be significantly faster.
Of course, there is an even easier way to do the above:
var total = firstArray.Sum()*secondArray.Sum();
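(Sum() is a LINQ extension method, so both this and the thirdArray.Sum() call above need using System.Linq; at the top of the file.)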
Related
So I wrote a bucket sort implementation (I use rnd.NextDouble() to fill my array, so all the elements are in the range between 0 and 1). I run a number of experiments (10 experiments for each array size) with different array sizes, starting with 1000 and then +1000, etc., up to 300,000. Sometimes it works fine, but sometimes it gives me an OutOfRange exception:
public static void BucketSortImplement(ref int countOfElements, ref float[] array)
{
    List<float>[] buckets = new List<float>[countOfElements];
    for (int i = 0; i < countOfElements; i++)
    {
        buckets[i] = new List<float>();
    }
    for (int i = 0; i < countOfElements; i++)
    {
        float indexOfElement = array[i] * countOfElements;
        buckets[(int)indexOfElement].Add(array[i]); // right here
    }
    for (int i = 0; i < countOfElements; i++)
    {
        for (int j = 0; j < buckets[i].Count; j++)
        {
            float keyElement = buckets[i][j];
            int k = j - 1;
            while (k >= 0 && buckets[i][k] > keyElement)
            {
                buckets[i][k + 1] = buckets[i][k];
                k -= 1;
            }
            buckets[i][k + 1] = keyElement;
        }
    }
    int arrayIndex = 0;
    for (int i = 0; i < countOfElements; i++)
    {
        for (int j = 0; j < buckets[i].Count; j++)
        {
            array[arrayIndex++] = buckets[i][j];
        }
    }
}
I am a bit confused, because the algorithm itself looks fine: I calculate array[i] * countOfElements to get the index of the bucket each element should go into. Could you please point me in the right direction?
If your array contains the value 1, it will lead to OutOfRange: array[i] * countOfElements is then equal to countOfElements, but if the list size is 3 (for example), the valid indexes are only in the range [0, 2]. Note that although rnd.NextDouble() returns values strictly below 1, storing the result in a float can round a value just below 1 up to exactly 1f, which is likely how a 1 ends up in your array.
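One way to guard against that edge case (a minimal sketch of just the changed lines; clamping is one possible fix, not the only one) is to force the computed index into the valid range:

float indexOfElement = array[i] * countOfElements;
// clamp so that a value of exactly 1.0f lands in the last bucket instead of one past the end
int bucketIndex = Math.Min((int)indexOfElement, countOfElements - 1);
buckets[bucketIndex].Add(array[i]);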
So far I have,
int[,] array2d = new int[20, 20];
for (int i = 1; i < array2d.GetLength(0); i++)
{
    for (int j = 1; j < array2d.GetLength(1); j++)
    {
        array2d[i, j] = i * j;
        Console.WriteLine(array2d[i, j]);
    }
}
but this is skipping quite a few numbers. I tried to fix it by changing the condition to <=, but that throws an IndexOutOfRangeException.
Is there some point where I made a major error, or is it a simple one?
Arrays are zero-based by default, so your array is [0..19, 0..19]; however, you want a different range: [1..20, 1..20]. We should not mix them. Either (the better choice):
int[,] array2d = new int[20, 20];
// i, j - array indexes
for (int i = 0; i < array2d.GetLength(0); i++)
{
    for (int j = 0; j < array2d.GetLength(1); j++)
    {
        // since i, j are array indexes we multiply (i + 1) * (j + 1)
        array2d[i, j] = (i + 1) * (j + 1);
        Console.Write($"{array2d[i, j],3} ");
    }
    Console.WriteLine();
}
Or
int[,] array2d = new int[20, 20];
// i, j are values to be multiplied
for (int i = 1; i <= array2d.GetLength(0); i++)
{
    for (int j = 1; j <= array2d.GetLength(1); j++)
    {
        // since i, j are values we have to compute array's indexes: i - 1, j - 1
        array2d[i - 1, j - 1] = i * j;
        Console.Write($"{array2d[i - 1, j - 1],3} ");
    }
    Console.WriteLine();
}
int[,] array2d = new int[20, 20];
for (int i = 0; i < 20; i++)
{
    for (int j = 0; j < 20; j++)
    {
        array2d[i, j] = (i + 1) * (j + 1);
        Console.WriteLine(array2d[i, j]);
    }
}
Your problem was that when you define an array like new int[20], it can be indexed from 0 to 19; when you try to access its value at index 20, you get this exception because index 20 doesn't exist in this array.
When you call the array's GetLength method, it returns the count of elements, which is 20, not the maximum index, which is 19.
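A quick illustration of the difference:

int[] a = new int[20];
Console.WriteLine(a.GetLength(0)); // prints 20, the element count
Console.WriteLine(a[19]);          // fine: 19 is the last valid index
Console.WriteLine(a[20]);          // throws IndexOutOfRangeException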
I've run into a stall trying to put together some code to average out 10x10 subarrays of a 2D multidimensional array.
Given a multidimensional array
var myArray = new byte[100, 100];
How should I go about creating 100 subarrays of 100 bytes (10x10) each?
Here are some examples of the index ranges the subarrays of the multidimensional array would cover.
[x1,y1,x2,y2]
Subarray1[0,0][9,9]
Subarray2[10,10][19,19]
Subarray3[20,20][29,29]
Given these subarrays, I would then need to average the subarray values to create a byte[10,10] from the original byte[100,100].
I realize this is not unbelievably difficult, but after spending 4 days debugging very low-level code, I'm now stuck on this and would appreciate some fresh eyes.
Use this as a reference. I used ints just for ease of use. The code is untested, but the idea is there.
var rowSize = 100;
var colSize = 100;
var arr = new int[rowSize, colSize];
var r = new Random();
for (int i = 0; i < rowSize; i++)
    for (int j = 0; j < colSize; j++)
        arr[i, j] = r.Next(20);

for (var subrow = 0; subrow < rowSize / 10; subrow++)
{
    for (var subcol = 0; subcol < colSize / 10; subcol++)
    {
        var startRow = subrow * 10;
        var startCol = subcol * 10;
        var avg = 0;
        for (var x = 0; x < 10; x++)
            for (var y = 0; y < 10; y++)
                avg += arr[startRow + x, startCol + y];
        avg /= 10 * 10;
        Console.WriteLine(avg);
    }
}
It looks like you're new to SO. Next time, try to post your attempt at the problem; it's better to fix your own code.
The only challenge is figuring out the function that, given the index of the subarray we're trying to populate, gives you the correct row and column indexes in your original 100x100 array; the rest is just a matter of copying the values:
// pseudocode
// given a subarrayIndex of 0 to 99, these will calculate the correct indices
rowIndexIn100x100Array = (subarrayIndex / 10) * 10 + subArrayRowIndexToPopulate;
colIndexIn100x100Array = (subarrayIndex % 10) * 10 + subArrayColIndexToPopulate;
I'll leave it as an exercise to you to deduce why the above functions correctly calculate the indices.
With the above, we can easily map the values:
var subArrays = new List<byte[,]>();
for (int subarrayIndex = 0; subarrayIndex < 100; subarrayIndex++)
{
    var subarray = new byte[10, 10];
    for (int i = 0; i < 10; i++)
        for (int j = 0; j < 10; j++)
        {
            int rowIndexIn100x100Array = (subarrayIndex / 10) * 10 + i;
            int colIndexIn100x100Array = (subarrayIndex % 10) * 10 + j;
            subarray[i, j] = originalArray[rowIndexIn100x100Array, colIndexIn100x100Array];
        }
    subArrays.Add(subarray);
}
Once we have the 10x10 arrays, calculating the average would be trivial using LINQ:
var averages = new byte[10, 10];
for (int i = 0; i < 10; i++)
    for (int j = 0; j < 10; j++)
    {
        averages[i, j] = (byte)subArrays[(i * 10) + j].Cast<byte>().Average(b => b);
    }
I'm trying to make a program that, when given a 4-digit number (for example, 1001), sums the digits of that number (for 1001 this is 1 + 0 + 0 + 1 = 2), then finds all sequences of 6 numbers from 1 to 6 (including permutations, i.e. 1*1*1*1*1*2 is different from 2*1*1*1*1*1) whose product equals that sum.
The result should be printed to the console in the following format: each sequence of 6 numbers in its Morse representation, separated by single pipes (1 is .----, 2 is ..---): .----|.----|.----|.----|..---|, then the next permutation on a new row: .----|.----|.----|..---|.----|, and so on.
The problem is, my solution doesn't show the correct answers, not even the correct number of them.
Here's my code (and please, if possible, tell me where my mistake is, rather than giving some one-line hack solution to the problem with LINQ and regex and God knows what):
string n = Console.ReadLine();
string[] digitsChar = new string[n.Length];
for (int i = 0; i < 4; i++)
{
    digitsChar[i] = n[i].ToString();
}
int[] digits = new int[digitsChar.Length];
for (int i = 0; i < 4; i++)
{
    digits[i] = Convert.ToInt32(digitsChar[i]);
}
int morseProduct = digits.Sum();
Console.WriteLine(morseProduct);
List<int> morseCodeNumbers = new List<int>();
for (int i = 1; i < 6; i++)
{
    for (int j = 1; i < 6; i++)
    {
        for (int k = 1; i < 6; i++)
        {
            for (int l = 1; i < 6; i++)
            {
                for (int m = 1; i < 6; i++)
                {
                    for (int o = 1; o < 6; o++)
                    {
                        int product = i * j * k * l * m * o;
                        if (product == morseProduct)
                        {
                            morseCodeNumbers.Add(i);
                            morseCodeNumbers.Add(j);
                            morseCodeNumbers.Add(k);
                            morseCodeNumbers.Add(l);
                            morseCodeNumbers.Add(m);
                            morseCodeNumbers.Add(o);
                        }
                    }
                }
            }
        }
    }
}
int numberOfNumbers = morseCodeNumbers.Count;
string[] morseCodes = new string[] { "-----", ".----", "..---", "...--", "....-", "....." };
for (int i = 0; i < numberOfNumbers; i++)
{
    int counter = 0;
    if (i % 5 == 0)
    {
        Console.WriteLine();
        counter = 0;
    }
    if (counter < 5)
    {
        int index = morseCodeNumbers[i];
        Console.Write(morseCodes[index] + "|");
        counter++;
    }
}
A lot of your for-loop conditions refer to i instead of j, k, l, and m; the same goes for the increment part. For example:
for (int j = 1; i < 6; i++)
should be
for (int j = 1; j < 6; j++)
Furthermore, if the range is from 1 to 6, you need to change < to <=:
for (int i = 1; i <= 6; i++)
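Put together, the corrected loop headers would look like this (a sketch showing only the fixed loops; the body stays exactly as you posted it):

for (int i = 1; i <= 6; i++)
    for (int j = 1; j <= 6; j++)
        for (int k = 1; k <= 6; k++)
            for (int l = 1; l <= 6; l++)
                for (int m = 1; m <= 6; m++)
                    for (int o = 1; o <= 6; o++)
                    {
                        // same product check and Add calls as before
                    }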
By the way, you don't need to convert the string to a string array to get the int array of digits. So while this is correct:
for (int i = 0; i < 4; i++)
{
    digitsChar[i] = n[i].ToString();
}
int[] digits = new int[digitsChar.Length];
for (int i = 0; i < 4; i++)
{
    digits[i] = Convert.ToInt32(digitsChar[i]);
}
you could do it like this (sorry for the LINQ):
var digits = n.Select(c => (int)char.GetNumericValue(c)).ToArray();
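For n = "1001" this yields { 1, 0, 0, 1 }, so digits.Sum() gives 2, matching the example above.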
I need a 1D convolution of 2 big arrays. I'm using this code in C#, but it takes a loooong time to run.
I know, I know! FFT convolution is very fast. But in this project I CAN'T use it.
It is a constraint of the project not to use FFT (please don't ask why :/).
This is my code in C# (ported from MATLAB, by the way):
var result = new double[input.Length + filter.Length - 1];
for (var i = 0; i < input.Length; i++)
{
    for (var j = 0; j < filter.Length; j++)
    {
        result[i + j] += input[i] * filter[j];
    }
}
So, does anyone know a fast convolution algorithm without FFT?
Linear convolution is numerically the same as polynomial multiplication (circular convolution adds an extra wrap-around step). Therefore, all the polynomial and large-integer multiplication algorithms can be used to perform convolution.
FFT is the only way to get the fast O(n log(n)) run-time. But you can still get sub-quadratic run-time using the divide-and-conquer approaches like Karatsuba's algorithm.
Karatsuba's algorithm is fairly easy to implement once you understand how it works. It runs in O(n^1.585), and will probably be faster than trying to super-optimize the classic O(n^2) approach.
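For reference, here is a minimal, untested sketch of Karatsuba applied to this problem. Assumptions that are mine, not from the question: C# 8 range syntax, a crossover cutoff of 32 (tune it by measurement), and the class/method names.

using System;

static class KaratsubaConvolution
{
    // Linear convolution via Karatsuba polynomial multiplication.
    // Pads both inputs with zeros up to the next power of two, multiplies,
    // then trims the result back to input.Length + filter.Length - 1 values.
    public static double[] Convolve(double[] input, double[] filter)
    {
        int n = 1;
        while (n < Math.Max(input.Length, filter.Length)) n *= 2;
        var a = new double[n];
        var b = new double[n];
        Array.Copy(input, a, input.Length);
        Array.Copy(filter, b, filter.Length);
        double[] full = Multiply(a, b); // 2n - 1 coefficients
        var result = new double[input.Length + filter.Length - 1];
        Array.Copy(full, result, result.Length);
        return result;
    }

    // Multiplies two length-n coefficient arrays (n a power of two).
    static double[] Multiply(double[] a, double[] b)
    {
        int n = a.Length;
        var result = new double[2 * n - 1];
        if (n <= 32) // below some cutoff the classic O(n^2) loop wins
        {
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    result[i + j] += a[i] * b[j];
            return result;
        }
        int m = n / 2;
        // Split a = a0 + a1*x^m and b = b0 + b1*x^m, then use
        // a*b = a0*b0 + ((a0+a1)*(b0+b1) - a0*b0 - a1*b1)*x^m + a1*b1*x^(2m),
        // which costs three recursive multiplications instead of four.
        double[] a0 = a[..m], a1 = a[m..], b0 = b[..m], b1 = b[m..];
        double[] low = Multiply(a0, b0);
        double[] high = Multiply(a1, b1);
        double[] mid = Multiply(Add(a0, a1), Add(b0, b1));
        for (int i = 0; i < low.Length; i++) { result[i] += low[i]; mid[i] -= low[i]; }
        for (int i = 0; i < high.Length; i++) { result[i + n] += high[i]; mid[i] -= high[i]; }
        for (int i = 0; i < mid.Length; i++) result[i + m] += mid[i];
        return result;
    }

    static double[] Add(double[] x, double[] y)
    {
        var sum = new double[x.Length];
        for (int i = 0; i < x.Length; i++) sum[i] = x[i] + y[i];
        return sum;
    }
}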
You could reduce the number of indexed accesses to result, as well as the Length properties:
int inputLength = input.Length;
int filterLength = filter.Length;
int resultLength = inputLength + filterLength - 1;
var result = new double[resultLength];
for (int i = resultLength - 1; i >= 0; i--)
{
    double sum = 0;
    // max(i - inputLength + 1, 0)
    int n1 = i < inputLength ? 0 : i - inputLength + 1;
    // min(i, filterLength - 1)
    int n2 = i < filterLength ? i : filterLength - 1;
    for (int j = n1; j <= n2; j++)
    {
        sum += input[i - j] * filter[j];
    }
    result[i] = sum;
}
If you further split the outer loop, you can get rid of some repeating conditionals. (This assumes 0 < filterLength ≤ inputLength ≤ resultLength)
int inputLength = input.Length;
int filterLength = filter.Length;
int resultLength = inputLength + filterLength - 1;
var result = new double[resultLength];
for (int i = 0; i < filterLength; i++)
{
    double sum = 0;
    for (int j = i; j >= 0; j--)
    {
        sum += input[i - j] * filter[j];
    }
    result[i] = sum;
}
for (int i = filterLength; i < inputLength; i++)
{
    double sum = 0;
    for (int j = filterLength - 1; j >= 0; j--)
    {
        sum += input[i - j] * filter[j];
    }
    result[i] = sum;
}
for (int i = inputLength; i < resultLength; i++)
{
    double sum = 0;
    for (int j = i - inputLength + 1; j < filterLength; j++)
    {
        sum += input[i - j] * filter[j];
    }
    result[i] = sum;
}
You can use a special IIR filter and then process it as:
y(n) = a1*y(n-1) + b1*y(n-2) + ... + a2*x(n-1) + b2*x(n-2) + ...
I think it's faster.
Here are two possibilities that may give minor speedups, but you'd need to test to be sure.
Unroll the inner loop to remove some tests. This will be easier if you know the filter length will always be a multiple of some N.
Reverse the order of the loops: do filter.Length passes over the whole array. This does less dereferencing in the inner loop but may have worse caching behavior.
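A minimal sketch of the loop-reversal idea (untested; it assumes the same input, filter, and result arrays as the original code and produces the same sums in a different order):

var result = new double[input.Length + filter.Length - 1];
for (var j = 0; j < filter.Length; j++)
{
    var fj = filter[j]; // hoisted so the inner loop dereferences filter only once per pass
    for (var i = 0; i < input.Length; i++)
        result[i + j] += input[i] * fj;
}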