I came across the code below for sorting an array.
I applied it to a very long array and it sorted it in under a second, maybe 20 milliseconds or less.
I have been reading about algorithm complexity and Big O notation and would like to know:
Which of the existing sorting methods is implemented in this code?
What is the complexity of the algorithm used here?
If you were to improve the algorithm/code below, what would you alter?
using System;

// This program sorts an array
public class SortArray
{
    static void Main(string[] args)
    {
        // declaring and initializing the array
        //int[] arr = new int[] {3,1,4,5,7,2,6,1, 9,11, 7, 2,5,8,4};
int[] arr = new int[] {489,491,493,495,497,529,531,533,535,369,507,509,511,513,515,203,205,207,209,211,213,107,109,111,113,115,117,11913,415,417,419,421,423,425,427,15,17,19,21,4,517,519,521,523,525,527,4,39,441,443,445,447,449,451,453,455,457,459,461,537,539,541,543,545,547,1,3,5,7,9,11,13,463,465,467,23,399,401,403,405,407,409,411,499,501,503,505,333,335,337,339,341,343,345,347,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,9,171,173,175,177,179,181,183,185,187,269,271,273,275,277,279,281,283,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,133,135,137,139,141,143,145,285,287,289,291,121,123,125,127,129,131,297,299,373,375,377,379,381,383,385,387,389,97,99,101,103,105,147,149,151,153,155,157,159,161,163,165,167,16,391,393,395,397,399,401,403,189,191,193,195,197,199,201,247,249,251,253,255,257,259,261,263,265,267,343,345,347,349,501,503,505,333,335,337,339,341,417,419,421,423,425,561,563,565,567,569,571,573,587,589,591,593,595,597,599,427,429,431,433,301,303,305,307,309,311,313,315,317,319,321,323,325,327,329,331,371,359,361,363,365,367,369,507,509,511,513,515,351,353,355,57,517,519,521,523,525,527,413,415,405,407,409,411,499,435,437,469,471,473,475,477,479,481,483,485,487,545,547,549,551,553,555,575,577,579,581,583,585,557,559,489,491,493,495,497,529,531,533,535,537,539,541,543,215,217,219,221,223,225,227,229,231,233,235,237,239,241,243,245,293,295};
        int temp;
        // traverse 0 to array length
        for (int i = 0; i < arr.Length; i++)
        {
            // compare arr[i] with every later element, swapping when out of order
            for (int j = i + 1; j < arr.Length; j++)
            {
                if (arr[i] > arr[j])
                {
                    temp = arr[i];
                    arr[i] = arr[j];
                    arr[j] = temp;
                }
            }
        }
        // print all elements of the array
        foreach (int value in arr)
        {
            Console.Write(value + " ");
        }
    }
}
This is selection sort. Its time complexity is O(𝑛²). It has nested loops over i and j, and together these visit every possible pair of indices {i, j} from the range {0,...,𝑛-1}, where 𝑛 is arr.Length. The number of pairs is a triangular number, and is equal to:
𝑛(𝑛-1)/2
...which is O(𝑛²)
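For example, with 𝑛 = 1,000 that is 1000·999/2 = 499,500 comparisons, and with 𝑛 = 100,000 it is already about 5·10⁹, which is why a quadratic sort feels instant on small arrays but stops scaling long before inputs get truly large.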
If we stick to selection sort, we can still find some improvements.
We can see that the role of the outer loop is to store in arr[i] the value that belongs there in the final sorted array, and never touch that entry again. It does so by searching for the minimum value in the part of the array that starts at index 𝑖.
Now during that search, which takes place in the inner loop, it keeps swapping smaller values into arr[i]. This may happen several times, as even smaller values may turn up while j walks to the right. That is a waste of operations, as we would prefer to perform only one swap. And this is possible: instead of swapping immediately, delay the operation. Keep track of where the minimum value is located (initially at i, but this may become some index j), and only when the inner loop completes, perform the swap.
There is a less important improvement: i never needs to reach arr.Length - 1, as the inner loop performs no iterations at that point. So the ending condition of the outer loop can exclude that final iteration.
Here is how that looks:
for (int i = 0, last = arr.Length - 1; i < last; i++)
{
    int k = i; // index of the least value found in arr[i..n-1] so far
    for (int j = i + 1; j < arr.Length; j++)
    {
        if (arr[k] > arr[j])
        {
            k = j; // don't swap yet -- just track where the minimum is located
        }
    }
    if (k > i) // now perform the single swap
    {
        int temp = arr[i];
        arr[i] = arr[k];
        arr[k] = temp;
    }
}
A further improvement can be to use the inner loop to not only locate the minimum value, but also the maximum value, and to move the found maximum to the right end of the array. This way both ends of the array get sorted gradually, and the inner loop will shorten twice as fast. Still, the number of comparisons remains the same, and the average number of swaps as well. So the gain is only in the iteration overhead.
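For illustration, here is a minimal sketch of that two-ended variant (my own untested addition, operating on the same arr as above; note the fix-up needed when the maximum starts at position lo):

for (int lo = 0, hi = arr.Length - 1; lo < hi; lo++, hi--)
{
    int minIndex = lo, maxIndex = lo;
    for (int j = lo + 1; j <= hi; j++)
    {
        if (arr[j] < arr[minIndex]) minIndex = j;
        if (arr[j] > arr[maxIndex]) maxIndex = j;
    }
    // move the minimum to the left end of the unsorted range
    int temp = arr[lo];
    arr[lo] = arr[minIndex];
    arr[minIndex] = temp;
    // if the maximum was sitting at lo, the first swap just moved it to minIndex
    if (maxIndex == lo) maxIndex = minIndex;
    // move the maximum to the right end of the unsorted range
    temp = arr[hi];
    arr[hi] = arr[maxIndex];
    arr[maxIndex] = temp;
}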
This is "Bubble sort" with O(n²) complexity. You can use "Merge sort" or "Quicksort" to improve your algorithm to O(n·log(n)). If you always know the minimum and maximum of your numbers, you can use "Digit sort" or "Radix sort" with O(n).
See: Sorting algorithms in C#
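In practice, if the goal is speed rather than learning, there is no need to hand-roll any of these; the framework's built-in sort is an O(n·log n) introspective sort (a quicksort/heapsort/insertion-sort hybrid). A minimal example:

using System;

public class SortArrayBuiltin
{
    static void Main()
    {
        int[] arr = { 3, 1, 4, 5, 7, 2, 6, 1, 9, 11, 7, 2, 5, 8, 4 };
        Array.Sort(arr); // introsort: O(n log n)
        Console.WriteLine(string.Join(" ", arr));
    }
}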
As part of learning C# I work through CodeSignal challenges. So far everything has gone well for me, except for the test stated in the title.
The problem is that my code is not efficient enough to run in under 3 seconds when the length of the array is 10^5 and the number of consecutive elements (k) is 1000. My code is as follows:
int arrayMaxConsecutiveSum(int[] inputArray, int k) {
    int sum = 0;
    int max = 0;
    for (int i = 0; i <= inputArray.Length - k; i++)
    {
        sum = inputArray.Skip(i).Take(k).Sum();
        if (sum > max)
            max = sum;
    }
    return max;
}
All the visible tests on the website pass, but when it comes to the hidden tests, test 20 fails with an error stating that:
19/20 tests passed. Execution time limit exceeded on test 20: Program exceeded the execution time limit. Make sure that it completes execution in a few seconds for any possible input.
I also tried unlocking solutions; the C# one is somewhat similar to mine, except it doesn't use LINQ. I also tried running it against the hidden tests, but the same error occurred, which is weird, as how was it accepted as a solution when it didn't even pass all the tests?
Is there any faster way to get the sum of an array?
I also thought of unlocking the hidden tests, but I don't think that would point me to a specific solution, as the problem would still persist.
It would seem that you are re-doing the addition of k numbers on every iteration of the loop. This pseudo code should be more efficient:
Take the sum of the first k elements and set this to be the max.
Loop as you had before, but each time subtract from the running sum the element at i-1 and add the element at i+k-1.
Check for the max as before and repeat.
The difference here is the number of additions in each iteration. In the original code you add k elements on every pass; in this version each pass subtracts a single element from and adds a single element to a running sum, so that is 2 operations versus k. That is why your code slows down as k gets large for large arrays.
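A minimal C# sketch of those steps (my own rendering of the pseudo code, with the window taken as inputArray[i .. i+k-1]):

int arrayMaxConsecutiveSum(int[] inputArray, int k)
{
    // sum of the first window of k elements
    int sum = 0;
    for (int j = 0; j < k; j++)
        sum += inputArray[j];
    int max = sum;
    // slide the window one step at a time: drop element i-1, pick up element i+k-1
    for (int i = 1; i <= inputArray.Length - k; i++)
    {
        sum = sum - inputArray[i - 1] + inputArray[i + k - 1];
        if (sum > max)
            max = sum;
    }
    return max;
}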
For this specific case, I would suggest that you not use the Skip method, as it iterates over the collection every time. You can check the Skip implementation here. Copying the code for reference:
public static IEnumerable<TSource> Skip<TSource>(this IEnumerable<TSource> source, int count) {
    if (source == null) throw Error.ArgumentNull("source");
    return SkipIterator<TSource>(source, count);
}

static IEnumerable<TSource> SkipIterator<TSource>(IEnumerable<TSource> source, int count) {
    using (IEnumerator<TSource> e = source.GetEnumerator()) {
        while (count > 0 && e.MoveNext()) count--;
        if (count <= 0) {
            while (e.MoveNext()) yield return e.Current;
        }
    }
}
As you can see, Skip walks the collection from the beginning every time, so if you have a huge collection and k is a high number, the execution time becomes sluggish.
Instead of using Skip, you can write a simple for loop that iterates over exactly the required items:
public static int arrayMaxConsecutiveSum(int[] inputArray, int k)
{
    int sum = 0;
    int max = 0;
    for (int i = 0; i <= inputArray.Length - k; i++)
    {
        sum = 0;
        for (int j = i; j < k + i; j++)
        {
            sum += inputArray[j];
        }
        if (sum > max)
            max = sum;
    }
    return max;
}
You can check this .NET Fiddle -- https://dotnetfiddle.net/RrUmZX -- where you can compare the time difference. For thorough benchmarking, I would suggest looking into BenchmarkDotNet.
You need to be careful when using LINQ while thinking about performance. Not that LINQ is slow, but it can easily hide a big operation behind a single word. In this line:
sum = inputArray.Skip(i).Take(k).Sum();
Skip(i) and Take(k) will each take approximately as long as a for loop stepping through thousands of items, and that line is run for every one of the thousands of items in the main loop.
There's no magic command that is faster; instead you have to rethink your approach so that you do the minimum of work inside the loop. In this case you can carry the sum over from each step and just add or remove individual values, rather than recalculating the whole window every time.
public static int arrayMaxConsecutiveSum(int[] inputArray, int k)
{
    int sum = 0;
    int max = 0;
    for (int i = 0; i < inputArray.Length; i++)
    {
        // Add the next item
        sum += inputArray[i];
        // Limit the sum to k items by dropping the one that left the window
        if (i >= k) sum -= inputArray[i - k];
        // Is this the highest sum of a full window so far?
        if (i >= k - 1 && sum > max)
            max = sum;
    }
    return max;
}
This is my solution.
public int ArrayMaxConsecutiveSum(int[] inputArray, int k)
{
    int max = inputArray.Take(k).Sum();
    int sum = max;
    for (int i = 1; i <= inputArray.Length - k; i++)
    {
        sum = sum - inputArray[i - 1] + inputArray[i + k - 1];
        if (sum > max)
            max = sum;
    }
    return max;
}
Yes, you should never run Take and Skip repeatedly on large lists, but here's a purely LINQ-based solution that is both easy to understand and completes the task in sufficient time. Iterative code will still outperform it, so you have to weigh the trade-off for your use case: benchmark speed on large inputs versus readability.
int arrayMaxConsecutiveSum(int[] inputArray, int k)
{
    var sum = inputArray.Take(k).Sum();
    return Math.Max(sum, Enumerable.Range(k, inputArray.Length - k)
                         .Max(i => sum += inputArray[i] - inputArray[i - k]));
}
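One caveat worth adding (my observation, not part of the original answer): when inputArray.Length == k, Enumerable.Range(k, 0) produces an empty sequence and Max throws an InvalidOperationException, so that edge case needs a guard:

int arrayMaxConsecutiveSum(int[] inputArray, int k)
{
    var sum = inputArray.Take(k).Sum();
    if (inputArray.Length == k)
        return sum; // Max would throw on the empty range below
    return Math.Max(sum, Enumerable.Range(k, inputArray.Length - k)
                         .Max(i => sum += inputArray[i] - inputArray[i - k]));
}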
I am trying to create an infinite map of 1's and 0's, procedurally generated by a PRNG, but only storing the 7x7 grid which is viewable. E.g. on the screen you will see a 7x7 grid which shows part of the map of 1's and 0's, and when you push right it shifts the 1's and 0's across and puts the next row in place.
If you then shift left, since it is pseudo-randomly generated, it regenerates and shows the original values. You will be able to shift either way, and up and down, infinitely.
The problem I have is that the initial map does not look very random, and the pattern repeats itself as you scroll right or left. I think it is something to do with how I generate the numbers; e.g. you could start viewing at x=5000 and y=10000, and it therefore needs to generate a unique seed from these two values.
(complexity is the variable for the size of the viewport)
void CreateMap() {
    if (complexity % 2 == 0) {
        complexity--;
    }
    if (randomseed) {
        levelseed = Time.time.ToString();
    }
    psuedoRandom = new System.Random(levelseed.GetHashCode());
    offsetx = psuedoRandom.Next();
    offsety = psuedoRandom.Next();
    levellocation = psuedoRandom.Next();
    level = new int[complexity, complexity];
    middle = complexity / 2;
    for (int x = 0; x < complexity; x++) {
        for (int y = 0; y < complexity; y++) {
            int hashme = levellocation + offsetx + x * offsety + y;
            psuedoRandom = new System.Random(hashme.GetHashCode());
            level[x, y] = psuedoRandom.Next(0, 2);
        }
    }
}
This is what I use for left and right movement,
void MoveRight() {
    offsetx++;
    for (int x = 0; x < complexity - 1; x++) {
        for (int y = 0; y < complexity; y++) {
            level[x, y] = level[x + 1, y];
        }
    }
    for (int y = 0; y < complexity; y++) {
        psuedoRandom = new System.Random(levellocation + offsetx * y);
        level[complexity - 1, y] = psuedoRandom.Next(0, 2);
    }
}

void MoveLeft() {
    offsetx--;
    for (int x = complexity - 1; x > 0; x--) {
        for (int y = 0; y < complexity; y++) {
            level[x, y] = level[x - 1, y];
        }
    }
    for (int y = 0; y < complexity; y++) {
        psuedoRandom = new System.Random(levellocation + offsetx * y);
        level[0, y] = psuedoRandom.Next(0, 2);
    }
}
To distill it down, I need to be able to set:
level[x, y] = RandomReturn(offsetx, offsety)
int RandomReturn(int x, int y) {
    psuedoRandom = new System.Random(Whatseedshouldiuse);
    return psuedoRandom.Next(0, 2);
}
Your problem, I think, is that you are creating a 'new' Random inside the loops. Don't re-declare it inside loops as you are; instead declare it at class level and then simply use psuedoRandom.Next. See this SO post for an example of the issue you are experiencing.
So instead of re-instantiating a Random class at every iteration like you are doing:
for (int x = 0; x < complexity; x++) {
    for (int y = 0; y < complexity; y++) {
        int hashme = levellocation + offsetx + x * offsety + y;
        psuedoRandom = new System.Random(hashme.GetHashCode());
        level[x, y] = psuedoRandom.Next(0, 2);
    }
}
do something more like
for (int x = 0; x < complexity; x++) {
    for (int y = 0; y < complexity; y++) {
        // give me the next random integer from the single class-level instance
        moreRandom = psuedoRandom.Next();
    }
}
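where the generator is created once at class level, for example (a sketch reusing the question's own field names, so the seed expression is an assumption):

// created once, at class level, not inside the loops
System.Random psuedoRandom = new System.Random(levelseed.GetHashCode());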
EDIT: As Enigmativity has pointed out in a comment below this post, re-instantiating at every iteration is also a waste of time / resources.
P.S. If you really need to do it, then why not use the time-dependent default seed value instead of specifying one?
I'd use SipHash to hash the coordinates:
It's relatively simple to implement
It takes both a key (seed for your map) and a message (the coordinates) as inputs
You can increase or decrease the number of rounds as a quality/performance trade-off.
Unlike your code, the results will be reproducible across processes and machines.
The SipHash website lists two C# implementations:
https://github.com/tanglebones/ch-siphash
https://github.com/BrandonHaynes/siphash-csharp
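For illustration, here is a rough sketch of the same idea with a simple integer mixing function standing in for SipHash (the constants and mixing steps below are my own stand-in, not SipHash; use one of the libraries above for the real thing):

// sketch: deterministic 0/1 cell value from (x, y, seed) via integer mixing
static int CellValue(int x, int y, int seed)
{
    unchecked
    {
        uint h = (uint)seed;
        h ^= (uint)x * 0x9E3779B9u;        // mix in x
        h = (h ^ (h >> 16)) * 0x85EBCA6Bu; // scramble (murmur3-style finalizer)
        h ^= (uint)y * 0xC2B2AE35u;        // mix in y
        h = (h ^ (h >> 13)) * 0x27D4EB2Du; // scramble again
        h ^= h >> 16;
        return (int)(h & 1);               // map to 0 or 1
    }
}

The grid fill then becomes level[x, y] = CellValue(x + offsetx, y + offsety, mapSeed) for some fixed integer mapSeed, and the result is the same on every machine.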
I've played a bit to create rnd(x, y, seed):
class Program
{
    const int Min = -1000; // define the starting point of the coordinate space

    static void Main(string[] args)
    {
        for (int x = 0; x < 10; x++)
        {
            for (int y = 0; y < 10; y++)
                Console.Write((RND(x, y, 123) % 100).ToString().PadLeft(4));
            Console.WriteLine();
        }
        Console.ReadKey();
    }

    static int RND(int x, int y, int seed)
    {
        var rndX = new Random(seed);
        var rndY = new Random(seed + 123); // a bit different
        // because Random is a LCG we can only move forward
        for (int i = Min; i < x - 1; i++)
            rndX.Next();
        for (int i = Min; i < y - 1; i++)
            rndY.Next();
        // return some f(x, y, seed) = function(rnd(x, seed), rnd(y, seed));
        // note: + binds tighter than <<, so this is (rndX.Next() + rndY.Next()) << 1
        return (rndX.Next() + rndY.Next()) << 1;
    }
}
It works, but it is of course an awful implementation (because Random is a LCG); it should give the general idea though. It's memory efficient (no arrays). An acceptable compromise is to implement value caching: while you are located in a certain part of the map, the surrounding values are taken from the cache instead of being regenerated.
For the same x, y and seed you will always get the same value (which in turn can be used as a cell seed).
TODO: find a way to make Rnd(x) and Rnd(y) without a LCG and its highly inefficient initialization (with a cycle).
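As a sketch of the caching compromise mentioned above (my own illustration, assuming a single fixed seed so the coordinates alone can serve as the key):

using System.Collections.Generic;

// memoize cell values by coordinate so revisited cells are not regenerated
static readonly Dictionary<(int x, int y), int> cache =
    new Dictionary<(int x, int y), int>();

static int CachedRND(int x, int y, int seed)
{
    if (!cache.TryGetValue((x, y), out int value))
    {
        value = RND(x, y, seed); // RND from the snippet above
        cache[(x, y)] = value;
    }
    return value;
}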
I solved it with this, after considering all of your suggested approaches.
void Drawmap() {
    for (int x = 0; x < complexity; x++) {
        for (int y = 0; y < complexity; y++) {
            psuedoRandom = new System.Random((x + offsetx) * (y + offsety));
            level[x, y] = psuedoRandom.Next(0, 2);
        }
    }
}