C# while loop not exiting (quicksort)

For some reason, my while loop on (i <= j) does not exit when j drops below i.
Watching the values in the debugger, I have repeatedly seen pairs such as (4, 3) and (6, 5) for i and j (respectively).
public static List<Item> QuickSort(List<Item> a, int left, int right)
{
    int i = left;
    int j = right;
    double pivotValue = ((left + right) / 2);
    Item x = a[Convert.ToInt32(pivotValue)];
    Item w;

    while (i <= j)
    {
        // these while loops continue looping after i <= j is false
        while (a[i] < x)
        {
            i++;
        }
        while (x < a[j])
        {
            j--;
        }
        if (i <= j)
        {
            w = a[i];
            a[i++] = a[j];
            a[j--] = w;
        }
    }
    if (left < j)
    {
        QuickSort(a, left, j);
    }
    if (i < right)
    {
        QuickSort(a, i, right);
    }
    return a;
}

I'll bet it's either the recursion or one of the inner while loops that's still running. I don't have the quicksort algorithm memorized, but I'd wager that if you broke in the debugger you'd find yourself in one of the two inner while loops.

Coding quicksort is a very good exercise; it's not as easy as it may look to get everything right! ;)
Here's one problem with your code. Consider what happens if j bumps into the pivot while i does not. You exchange the pivot (at j) with the value at i and increment i past the pivot. This cannot continue correctly (if you understand quicksort, you should understand why).
After choosing the pivot, I like to exchange it with one bound (either left or right), exchange values until i and j meet, and then put the pivot back at that place. There are other ways, of course.
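A minimal sketch of that scheme, using plain ints in C rather than the question's Item type (partition_pivot_at_end and swap_int are made-up names for illustration): the pivot is parked at the right bound, the rest is partitioned, and the pivot is swapped back to the split point.

```c
static void swap_int(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Park the pivot at the right bound, partition the rest, then put the
 * pivot back at the split point. Returns the pivot's final index. */
static int partition_pivot_at_end(int a[], int left, int right)
{
    swap_int(&a[(left + right) / 2], &a[right]); /* pivot out of the way */
    int pivot = a[right];
    int store = left;
    for (int i = left; i < right; i++)
        if (a[i] < pivot)
            swap_int(&a[i], &a[store++]);
    swap_int(&a[store], &a[right]);              /* pivot back in place */
    return store;
}
```

Everything left of the returned index is smaller than the pivot, everything right of it is at least the pivot, so the recursion never has to touch the pivot again.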

It is not an infinite loop.
while (a[i] < x)
{
    i++;
}
If x is the largest value in the list, this loop keeps incrementing i until the index runs past the end of the array, and the program throws an IndexOutOfRangeException.

What happens if i == j and a[i] == x...? No increment, no decrement, just swap and continue...
Consider changing while (i <= j) to while (i < j). Also, the swap is not necessary when i == j...
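For reference, here is a sketch of the same Hoare-style loop in C with plain ints. With a consistent comparison, each inner scan stops at the pivot at the latest, and after a swap both indexes advance past the swapped pair, so the outer loop must terminate; that suggests checking the (not shown) Item comparison operators in the original code.

```c
/* Hoare-style quicksort: after the inner scans, i and j are advanced
 * past the swapped pair, so the scans cannot run off the ends of the
 * array and the outer loop always makes progress. */
static void quicksort(int a[], int left, int right)
{
    int i = left, j = right;
    int pivot = a[left + (right - left) / 2];

    while (i <= j) {
        while (a[i] < pivot) i++;  /* stops at the pivot at the latest */
        while (pivot < a[j]) j--;  /* likewise */
        if (i <= j) {
            int tmp = a[i];
            a[i++] = a[j];
            a[j--] = tmp;
        }
    }
    if (left < j)  quicksort(a, left, j);
    if (i < right) quicksort(a, i, right);
}
```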


Which is the more efficient way to swap these list elements in C#? Why do they behave differently?

I'm doing brainteasers and exercises on HackerRank to prepare for an interview test and came across some weird behaviour.
The task in question is Minimum Swaps 2 from the Interview Preparation Kit's Arrays section.
Depending on which of the two elements I put into my temporary variable, it either passes or fails with a timeout on most test cases.
Version A:
static int minimumSwaps(int[] arr) {
    int swaps = 0;
    int tmp = 0;
    int n = arr.Length;
    for (int i = 0; i < n; i++) {
        if (arr[i] == i + 1) continue;
        tmp = arr[arr[i] - 1];
        arr[arr[i] - 1] = arr[i];
        arr[i] = tmp;
        i--;
        swaps++;
    }
    return swaps;
}
Version B:
static int minimumSwaps(int[] arr) {
    int swaps = 0;
    int tmp = 0;
    int n = arr.Length;
    for (int i = 0; i < n; i++) {
        if (arr[i] == i + 1) continue;
        tmp = arr[i];
        arr[i] = arr[arr[i] - 1];
        arr[arr[i] - 1] = tmp;
        i--;
        swaps++;
    }
    return swaps;
}
A passes; B times out on all but one test.
What the function actually does is find the minimum number of swaps needed to sort an array of consecutive integers. That's why i can be compared against both values and indices.
If arr[i] is at its corresponding index, we continue; if it's wrong, we swap it with the index it's supposed to be at, decrement i, and repeat.
I'm honestly at a loss as to why storing arr[arr[i]-1] in tmp is less complicated than storing arr[i] first. It must have something to do with the intermediate machine code it gets compiled to, right?
I'd be grateful for any explanation so I don't run into similar issues in the future.
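No answer appears above, but one plausible explanation (an assumption, not taken from the thread) is evaluation order rather than machine code: in Version B the second line overwrites arr[i], so the third line's index expression arr[i] - 1 is computed from the new value and tmp lands in the wrong slot, losing an element; the i-- retry then revisits an index that can never become correct. The two swap bodies sketched in C:

```c
/* Version A: the index expression arr[arr[i]-1] is read before arr[i]
 * is overwritten, so both writes go where intended. */
static void swap_a(int arr[], int i)
{
    int tmp = arr[arr[i] - 1];
    arr[arr[i] - 1] = arr[i];
    arr[i] = tmp;
}

/* Version B: arr[i] is overwritten on the second line, so the third
 * line computes arr[i]-1 from the NEW value and writes tmp to the
 * wrong slot, corrupting the permutation. */
static void swap_b(int arr[], int i)
{
    int tmp = arr[i];
    arr[i] = arr[arr[i] - 1];
    arr[arr[i] - 1] = tmp;
}
```

Starting from {3, 1, 2} and i = 0, Version A yields {2, 1, 3} as intended, while Version B yields {2, 3, 2}: the value 1 is gone, so the sort can never finish.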

Sort two dimensional string array by id using insertion sort - C#

I'm new here, and sorry if my question is stupid, but I really need your help.
I need to sort this two-dimensional string array by id (the first column):
string[,] a = new string[,]
{
    {"2", "Pena", "pena"},
    {"1", "Kon", "kon"},
    {"5", "Sopol", "sopol"},
    {"4", "Pastet", "pastet"},
    {"7", "Kuche", "kuche"}
};
The problem is that I'm sorting only the numbers, and I want the words to move with them. Here's what I have so far:
static void Main(string[] args)
{
    string[,] a = new string[,]
    {
        {"2", "Pena", "pena"},
        {"1", "Kon", "kon"},
        {"5", "Sopol", "sopol"},
        {"4", "Pastet", "pastet"},
        {"7", "Kuche", "kuche"}
    };
    int b = a.GetLength(0);
    Console.WriteLine(b);
    Console.WriteLine(a[0, 0]);
    Console.WriteLine(a[0, 1]);
    Console.WriteLine(a[1, 0]);
    InsertionSort(a, b);
    Console.WriteLine();
    Console.Write("Sorted Array: ");
    printArray(a);
    Console.WriteLine();
    Console.Write("Press any key to close");
    Console.ReadKey();
}

public static void InsertionSort(string[,] iNumbers, int iArraySize)
{
    int i, j, index;
    for (i = 1; i < iArraySize; i++)
    {
        for (int k = 0; k < iNumbers.GetLength(1); k++)
        {
            index = Convert.ToInt32(iNumbers[i, 0]);
            j = i;
            while ((j > 0) && (Convert.ToInt32(iNumbers[j - 1, 0]) > index))
            {
                iNumbers[j, k] = iNumbers[j - 1, k];
                j = j - 1;
            }
            iNumbers[j, 0] = Convert.ToString(index);
        }
    }
}

static void printArray(string[,] iNumbers)
{
    for (int i = 0; i < iNumbers.GetLength(0); i++)
    {
        for (int k = 0; k < iNumbers.GetLength(1); k++)
        {
            Console.Write(iNumbers[i, k] + " ");
        }
    }
    Console.WriteLine();
}
Unfortunately, as output I get:
1 Pena pena 2 Kon kon 4 Sopol sopol 5 Pastet pastet 7 Kuche kuche
I would be really grateful if you could help me.
Based on the nature of the example and the question, I am guessing that this is a homework assignment and so must be implemented in a fashion that is a) not far from your current example, and b) actually demonstrates an insertion sort.
With that in mind, the following is a corrected version of your example that works:
public static void InsertionSort(string[,] iNumbers, int iArraySize)
{
    int i, j, index;
    for (i = 1; i < iArraySize; i++)
    {
        index = Convert.ToInt32(iNumbers[i, 0]);
        j = i;
        while ((j > 0) && (Convert.ToInt32(iNumbers[j - 1, 0]) > index))
        {
            for (int k = 0; k < iNumbers.GetLength(1); k++)
            {
                string temp = iNumbers[j, k];
                iNumbers[j, k] = iNumbers[j - 1, k];
                iNumbers[j - 1, k] = temp;
            }
            j = j - 1;
        }
    }
}
I made two key changes to your original code:
I rearranged the k and j loops so that the k loop is the inner-most loop, rather than the j loop. Your j loop is the one performing the actual sort, while the k loop is what should be actually moving a row for an insertion operation.
In your original example, you had this reversed, with the result that by the time you went to sort anything except the index element of a row, everything looked sorted to the code (because it's only comparing the index element) and so nothing else got moved.
With the above example, the insertion point is determined first, and then the k loop is used simply to do the actual insertion.
I added logic to actually swap the elements. In your original code, there wasn't really a swap there. You had hard-coded the second part of a swap, simply copying the index element to the target, so the swap did work for the index element. But it wouldn't have achieved the swap for any other element; instead, you'd just have overwritten data.
With the above, a proper, traditional swap is used: one of the values to be swapped is copied to a temp local variable, the other value to be swapped is copied to the location of the first value, and then finally the saved value is copied to the location of the second.
The above should be good enough to get you back on track with your assignment. However, I will mention that you can get rid of the k loop altogether if your teacher will allow you to implement this using jagged arrays (i.e. a single-dimensional array containing several other single-dimensional arrays), or by using a second "index array" (i.e. where you swap the indexes relative to the original array, but leave the original array untouched).
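As a sketch of that last suggestion, in C with hypothetical names: sort a separate array of row indexes by the id column (here pre-parsed into a keys array) and leave the table itself untouched.

```c
/* Insertion sort on an index array: rows[] holds row numbers into the
 * table, keys[] holds the id column already parsed to int. Only the
 * index array moves; the table's rows stay where they are. */
static void sort_indexes(int rows[], const int keys[], int n)
{
    for (int i = 1; i < n; i++) {
        int row = rows[i];
        int j = i;
        while (j > 0 && keys[rows[j - 1]] > keys[row]) {
            rows[j] = rows[j - 1];
            j--;
        }
        rows[j] = row;
    }
}
```

To print the table sorted, you then iterate rows[] and print table row rows[i] at each step, so no string data is ever copied.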

Sequentially removing items in a 2D array on a position's row and column, starting from the center

I'm having another problem in my Bejeweled clone. I want to make Star Gems act like they do in Bejeweled 3, meaning they destroy gems outward from the star gem(the center). So, say the star gem was at (4, 4) in a 10x10 2D array; it would destroy the positions (3, 4), (5, 4), (4, 3) and (4, 5) first, then, say, 10 frames later, destroy (2, 4), (6, 4), (4, 2), and (4, 6), and so on.
Right now I have the StarDestruction() method storing the position of the star gem to a couple of Board-scope variables, and the positions to destroy in a List<Gem>, like so:
Board.starPosX = i;
Board.starPosY = j;
for (int x = 0; x < gems.GetLength(0); x++)
{
    moveTimer = 0;
    int k = x;
    int m = x;
    int q = x;
    int n = x;
    if (i - k < 0) k = 0;
    if (i + m > gems.GetLength(0) - 1) m = 0;
    if (j - q < 0) q = 0;
    if (j + n > gems.GetLength(1) - 1) n = 0;
    gemQ.Add(gems[i - k, j]);
    gemQ.Add(gems[i + m, j]);
    gemQ.Add(gems[i, j - q]);
    gemQ.Add(gems[i, j + n]);
}
where gemQ is the List<Gem> and gems is the 2D Gem array.
This is how I currently destroy the gems, in Update():
foreach (Gem g in gemQ)
{
    if (timer2 % 12 == 0)
        g.KillGem(gems[starPosX, starPosY]);
}
where timer2 is the timer for destroying the gems.
I have a bit simpler code for the original gem destroying, but it didn't seem to work any differently than this version. Here's the simpler code:
for (int x = 0; x < gems.GetLength(0); x++)
{
    if (x != i)
    {
        gems[x, j].KillGem(gems[i, j]);
    }
    if (x != j)
    {
        gems[i, x].KillGem(gems[i, j]);
    }
}
Any ideas?
Complete edit of my reply, based on our conversation in the comments.
I understand now that:
You want the star gem to destroy all other gems in the same column and same row as the star gem.
You want four gems to be destroyed at a time, with a delay between each four.
The explosion should move outward from the star gem, i.e. destroying the closest gems first.
Your foreach gates the destruction on the timer like this:
timer2 % 12 == 0
Whenever that is true for one gem, it's typically true for all of them, so the whole row and column die in the same frame. You don't want to stall between destructions either, otherwise the destruction won't get rendered, or the game will visibly lag.
The second issue is that even if you did space out the destruction of the gems, you'd likely find that the destruction occurs in a spiral instead of four at a time.
With these points in mind, you'll need to do this instead:
// The initial destroy gem code
var gemsToDestroy = new List<Gem>();
for (int x = 0; x < gems.GetLength(0); x++)
{
    if (x != i)
    {
        gemsToDestroy.Add(gems[x, j]);
    }
    if (x != j)
    {
        gemsToDestroy.Add(gems[i, x]);
    }
}

// You can change your for loop above to achieve this directly, but this is the idea.
// We are putting them in order of closest first.
gemsToDestroy = gemsToDestroy.OrderBy(o => o.DistanceFromStar).ToList();

// Your periodic UPDATE code - this is pseudo-code but should convey the general idea.
// I've been very lazy with my use of LINQ here; you should refactor this solution
// to remove as many iterations of the list as possible.
if (gemsToDestroy.Any() && timer.Ready)
{
    var closestDistance = gemsToDestroy[0].DistanceFromStar;
    foreach (var gem in gemsToDestroy.Where(gem => gem.DistanceFromStar == closestDistance))
    {
        gem.Destroy();
    }
    // Again you can do this without LINQ; the point is I've removed the now-destroyed gems from the list.
    gemsToDestroy = gemsToDestroy.Where(gem => gem.DistanceFromStar != closestDistance).ToList();
    timer.Reset(); // So that we wait X time before destroying the next set
}
Don't forget to prevent player input while there are items in the gemsToDestroy list and also to stop the game timer while destroying, so that the player isn't penalised time for playing well.
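An alternative to sorting a list by distance, sketched in C with invented names (Cell, ring_cells): since the star only hits its own row and column, each "ring" of the explosion can be computed directly from the star position, and you destroy one ring per timer tick.

```c
typedef struct { int x, y; } Cell;

/* For a star at (sx, sy) on an n x n board, fill out[] with the cells
 * of ring r: the up-to-four positions exactly r steps away along the
 * star's row and column, skipping positions that fall off the board.
 * Returns how many cells were written. Destroying ring 1, then ring 2,
 * ... with a frame delay between them gives the outward explosion. */
static int ring_cells(int sx, int sy, int n, int r, Cell out[4])
{
    int count = 0;
    if (sx - r >= 0) out[count++] = (Cell){ sx - r, sy };
    if (sx + r < n)  out[count++] = (Cell){ sx + r, sy };
    if (sy - r >= 0) out[count++] = (Cell){ sx, sy - r };
    if (sy + r < n)  out[count++] = (Cell){ sx, sy + r };
    return count;
}
```

The update loop then just keeps a current ring number and increments it each time the timer fires, stopping once ring_cells returns 0.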

The negamax algorithm..what's wrong?

I'm trying to program a chess game and have spent days trying to fix the code. I even tried minimax but ended up with the same result. The AI always starts at the corner and moves a pawn out of the way, then the rook just moves back and forth each turn. If it gets eaten, the AI moves every piece from one side to the other until all are eaten. Do you know what could be wrong with the following code?
public Move MakeMove(int depth)
{
    bestmove.reset();
    bestscore = 0;
    score = 0;
    int maxDepth = depth;
    negaMax(depth, maxDepth);
    return bestmove;
}

public int EvalGame() // calculates the score from all the pieces on the board
{
    int score = 0;
    for (int i = 0; i < 8; i++)
    {
        for (int j = 0; j < 8; j++)
        {
            if (AIboard[i, j].getPiece() != GRID.BLANK)
            {
                score += EvalPiece(AIboard[i, j].getPiece());
            }
        }
    }
    return score;
}

private int negaMax(int depth, int maxDepth)
{
    if (depth <= 0)
    {
        return EvalGame();
    }
    int max = -200000000;
    for (int i = 0; i < 8; i++)
    {
        for (int j = 0; j < 8; j++)
        {
            for (int k = 0; k < 8; k++)
            {
                for (int l = 0; l < 8; l++)
                {
                    if (GenerateMove(i, j, k, l)) // generates all possible moves
                    {
                        // code to move the piece on the board
                        board.makemove(nextmove);
                        score = -negaMax(depth - 1, maxDepth);
                        if (score > max)
                        {
                            max = score;
                            if (depth == maxDepth)
                            {
                                bestmove = nextmove;
                            }
                        }
                        // code to undo the move
                        board.undomove();
                    }
                }
            }
        }
    }
    return max;
}

public bool GenerateMove(int i, int j, int k, int l)
{
    Move move;
    move.moveFrom.X = i;
    move.moveFrom.Y = j;
    move.moveTo.X = k;
    move.moveTo.Y = l;
    if (checkLegalMoves(move.moveTo, move.moveFrom)) // if a legal move
    {
        nextMove = move;
        return true;
    }
    return false;
}
This code:
public Move MakeMove(int depth)
{
    bestscore = 0;
    score = 0;
    int maxDepth = depth;
    negaMax(depth, maxDepth);
    return bestmove;
}
Notice that the best move is never set! The return score of negaMax is never compared against move alternatives here. You're not even looping over the possible moves.
Also, it's really hard to look for errors when the code you submit is not fully consistent. The negaMax method takes two arguments in one place in your code, then it takes four arguments in the recursive call?
I also recommend better abstraction in your code. Separate the board representation, move representation, move generation, and the search algorithm. That will help you a lot. As an example: why do you need the depth counter in the move generation?
-Øystein
You have two possible issues:
It is somewhat ambiguous as you don't show us your variable declarations, but I think you are using too many global variables. Negamax works by calculating best moves at each node, and so while searching the values and moves should be local. In any case, it is good practice to keep the scope of variables as tight as possible. It is harder to reason about the code when traversing the game tree changes so many variables. However, your search looks like it should return the correct values.
Your evaluation does not appear to discriminate which side is playing. I don't know if EvalPiece handles this, but in any case evaluation should be from the perspective of whichever side currently has the right to move.
You also have other issues that are not directly related to your problem:
Your move generation is scary. You're traversing every possible pair of from/to squares on the board (4096 pairs). This is highly inefficient, and I don't understand how such a method would even work. You only need to loop through the pieces on the board, or, for a slower method, every square on the board (64 squares).
MakeMove seems like it may be the place for the root node. Right now, your scheme works, in that the last node the search exits from will be root. However, it is common to use special routines at the root such as iterative deepening, so it may be good to have a separate loop at the root.
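To illustrate point 2 (evaluation from the side to move), here is a tiny self-contained negamax in C over an explicit game tree; the Node shape is invented for the sketch. Leaf scores are stored from White's point of view and flipped when Black is to move, and each recursive call negates the child's value, because a position that is good for the opponent is bad for us. Drop either sign flip and the search returns nonsense, which matches the rook-shuffling symptom described above.

```c
#include <stddef.h>

/* A tiny game tree: leaves carry a static evaluation from White's
 * point of view; inner nodes just list their children. */
typedef struct Node {
    int nchildren;
    const struct Node *children;
    int white_score; /* used only at leaves */
} Node;

/* Value of the position for the side to move. */
static int negamax(const Node *n, int white_to_move)
{
    if (n->nchildren == 0)
        return white_to_move ? n->white_score : -n->white_score;

    int best = -1000000;
    for (int i = 0; i < n->nchildren; i++) {
        int score = -negamax(&n->children[i], !white_to_move);
        if (score > best)
            best = score;
    }
    return best;
}
```

On a depth-2 tree with leaf scores {3, 5} and {2, 9} (White's view, White to move at the root), minimax says the opponent answers with 3 and 2 respectively, so the root value is 3; negamax with both sign flips agrees.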

faster implementation of sum ( for Codility test )

How can the following simple implementation of sum be faster?
private long sum(int[] a, int begin, int end) {
    if (a == null) {
        return 0;
    }
    long r = 0;
    for (int i = begin; i < end; i++) {
        r += a[i];
    }
    return r;
}
EDIT
Some background is in order.
Reading the latest entry on Coding Horror, I came to this site: http://codility.com which has this interesting programming test.
Anyway, I got 60 out of 100 on my submission, and basically (I think) it is because of this implementation of sum, since the parts where I failed are the performance parts. I'm getting TIME_OUT_ERRORs.
So, I was wondering whether an optimization of the algorithm is possible.
No built-in functions or assembly are allowed. This may be done in C, C++, C#, Java or pretty much any other language.
EDIT
As usual, mmyers was right. I did profile the code and saw that most of the time was spent in that function, but I didn't understand why. So what I did was throw away my implementation and start with a new one.
This time I've got an optimal solution [according to San Jacinto, O(n) - see comments to MSN below].
This time I got 81% on Codility, which I think is good enough. The problem is that it didn't take me the 30 minutes but around 2 hours, though I guess that still leaves me a decent programmer, for I could work on the problem until I found an optimal solution.
Here's my result.
I never understood what those "combinations of..." cases are, nor how to test "extreme_first".
I don't think your problem is with the function that's summing the array; it's probably that you're summing the array WAY too frequently. If you simply sum the WHOLE array once, and then step through it until you find the first equilibrium point, you should reduce the execution time sufficiently.
int equi(int[] A) {
    int equi = -1;
    long lower = 0;
    long upper = 0;
    foreach (int i in A)
        upper += i;
    for (int i = 0; i < A.Length; i++)
    {
        upper -= A[i];
        if (upper == lower)
        {
            equi = i;
            break;
        }
        else
            lower += A[i];
    }
    return equi;
}
Here is my solution and I scored 100%
public static int solution(int[] A)
{
    double sum = A.Sum(d => (double)d);
    double leftSum = 0;
    for (int i = 0; i < A.Length; i++) {
        if (leftSum == (sum - leftSum - A[i])) {
            return i;
        }
        else {
            leftSum = leftSum + A[i];
        }
    }
    return -1;
}
If this is based on the actual sample problem, your issue isn't the sum. Your issue is how you calculate the equilibrium index. A naive implementation is O(n^2). An optimal solution is much much better.
This code is simple enough that unless a is quite small, it's probably going to be limited primarily by memory bandwidth. As such, you probably can't hope for any significant gain by working on the summing part itself (e.g., unrolling the loop, counting down instead of up, executing sums in parallel -- unless they're on separate CPUs, each with its own access to memory). The biggest gain will probably come from issuing some preload instructions so most of the data will already be in the cache by the time you need it. The rest will just (at best) get the CPU to hurry up more, so it waits longer.
Edit: It appears that most of what's above has little to do with the real question. I tried just using std::accumulate() for the initial addition, and it seemed to be fast enough.
Some tips:
Use a profiler to identify where you're spending a lot of time.
Write good performance tests so that you can tell the exact effect of every single change you make. Keep careful notes.
If it turns out that the bottleneck is the bounds checking that ensures you're dereferencing a legal address inside the array, and you can guarantee that begin and end are in fact both inside the array, then consider pinning the array with fixed, taking a pointer to it, and running the algorithm over pointers rather than array indexes. Pointers are unsafe; they do not spend any time checking that you're still inside the array, so they can be somewhat faster. But you then take responsibility for ensuring that you do not corrupt memory elsewhere in the address space.
I don't believe the problem is in the code you provided, but somehow the bigger solution must be suboptimal. This code looks good for calculating the sum of one slice of the array, but maybe it's not what you need to solve the whole problem.
Probably the fastest you could get would be to have your int array 16-byte aligned, stream 32 bytes into two __m128i variables (VC++) and call _mm_add_epi32 (again, a VC++ intrinsic) on the chunks. Reuse one of the chunks to keep adding into it and on the final chunk extract your four ints and add them the old fashioned way.
The bigger question is why simple addition is a worthy candidate for optimization.
Edit: I see it's mostly an academic exercise. Perhaps I'll give it a go tomorrow and post some results...
In C# 3.0, on my computer and my OS, this is faster as long as you can guarantee that 4 consecutive numbers won't overflow the range of an int, probably because most additions are then done using 32-bit math.
However using a better algorithm usually provides higher speed up than any micro-optimization.
Time for a 100 million element array:
4999912596452418 -> 233ms (sum)
4999912596452418 -> 126ms (sum2)
private static long sum2(int[] a, int begin, int end)
{
    if (a == null) { return 0; }
    long r = 0;
    int i = begin;
    for (; i < end - 3; i += 4)
    {
        r += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    }
    for (; i < end; i++) { r += a[i]; }
    return r;
}
This won't help you with an O(n^2) algorithm, but you can optimize your sum.
At a previous company, we had Intel come by and give us optimization tips. They had one non-obvious and somewhat cool trick. Replace:
long r = 0;
for (int i = begin; i < end; i++) {
    r += a[i];
}
with
long r1 = 0, r2 = 0, r3 = 0, r4 = 0;
for (int i = begin; i < end; i += 4) {
    r1 += a[i];
    r2 += a[i + 1];
    r3 += a[i + 2];
    r4 += a[i + 3];
}
long r = r1 + r2 + r3 + r4;
// Note: need to be clever if the length isn't divisible by 4
Why this is faster:
In the original implementation, your variable r is a bottleneck. Every time through the loop, you have to pull data from memory array a (which takes a couple of cycles), but you can't overlap multiple additions, because the value of r in the next iteration of the loop depends on the value of r in this iteration. In the second version, r1, r2, r3, and r4 are independent dependency chains, so the processor can overlap their execution (instruction-level parallelism). Only at the very end do they come together.
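A sketch of the same trick in C, with the "not divisible by 4" note handled by a scalar tail loop (sum4 is a made-up name):

```c
/* Four-accumulator sum: the main loop stops while at least 4 elements
 * remain, and a scalar loop mops up the (at most 3) leftovers. The
 * four accumulators carry independent dependency chains, so the CPU
 * can overlap the additions. */
static long long sum4(const int *a, int begin, int end)
{
    long long r1 = 0, r2 = 0, r3 = 0, r4 = 0;
    int i = begin;
    for (; i + 3 < end; i += 4) {
        r1 += a[i];
        r2 += a[i + 1];
        r3 += a[i + 2];
        r4 += a[i + 3];
    }
    long long r = r1 + r2 + r3 + r4;
    for (; i < end; i++) /* remainder */
        r += a[i];
    return r;
}
```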
Here is a thought:
private static ArrayList equi(int[] A)
{
    ArrayList answer = new ArrayList();
    if (A == null || A.Length == 0)
    {
        answer.Add(-1);
        return answer;
    }
    long sum0 = 0, sum1 = 0;
    for (int i = 0; i < A.Length; i++) sum0 += A[i];
    for (int i = 0; i < A.Length; i++)
    {
        sum0 -= A[i];
        if (i > 0) { sum1 += A[i - 1]; }
        if (sum1 == sum0) answer.Add(i);
    }
    return answer;
}
If you are using C or C++ and develop for modern desktop systems and are willing to learn some assembler or learn about GCC intrinsics, you could use SIMD instructions.
This library is an example of what is possible for float and double arrays, similar results should be possible for integer arithmetic since SSE has integer instructions as well.
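For illustration, a rough SSE2 sketch in C of the _mm_add_epi32 idea mentioned above. It assumes the length is a multiple of 4, uses unaligned loads, and ignores per-lane overflow; a real version needs a scalar tail loop and, for the aligned-array approach, _mm_load_si128 on 16-byte-aligned data.

```c
#include <emmintrin.h> /* SSE2 intrinsics */

/* Accumulate four 32-bit lanes in parallel with _mm_add_epi32, then
 * add the lanes together at the end. Assumes (end - begin) % 4 == 0
 * and that each lane's partial sum fits in 32 bits. */
static long long sum_sse2(const int *a, int begin, int end)
{
    __m128i acc = _mm_setzero_si128();
    for (int i = begin; i < end; i += 4)
        acc = _mm_add_epi32(acc, _mm_loadu_si128((const __m128i *)(a + i)));
    int lanes[4];
    _mm_storeu_si128((__m128i *)lanes, acc);
    return (long long)lanes[0] + lanes[1] + lanes[2] + lanes[3];
}
```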
In C++, the following:
int* a1 = a + begin;
for (int i = end - begin - 1; i >= 0; i--)
{
    r += a1[i];
}
might be faster.
The advantage is that we compare against zero in the loop.
Of course, with a really good optimizer there should be no difference at all.
Another possibility would be:
int* a2 = a + end - 1;
for (int i = -(end - begin - 1); i <= 0; i++)
{
    r += a2[i];
}
Here we traverse the items in the same order, just without comparing against end.
I did the same naive implementation, and here's my O(n) solution. I did not use the IEnumerable Sum method because it was not available at Codility. My solution still doesn't check for overflow in case the input has large numbers, so it's failing that particular test on Codility.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            var list = new[] { -7, 1, 5, 2, -4, 3, 0 };
            Console.WriteLine(equi(list));
            Console.ReadLine();
        }

        static int equi(int[] A)
        {
            if (A == null || A.Length == 0)
                return -1;
            if (A.Length == 1)
                return 0;
            var upperBoundSum = GetTotal(A);
            var lowerBoundSum = 0;
            for (var i = 0; i < A.Length; i++)
            {
                lowerBoundSum += (i - 1) >= 0 ? A[i - 1] : 0;
                upperBoundSum -= A[i];
                if (lowerBoundSum == upperBoundSum)
                    return i;
            }
            return -1;
        }

        private static int GetTotal(int[] ints)
        {
            var sum = 0;
            for (var i = 0; i < ints.Length; i++)
                sum += ints[i];
            return sum;
        }
    }
}
100% O(n) solution in C:
int equi(int A[], int n) {
    long long sumLeft = 0;
    long long sumRight = 0;
    int i;
    if (n <= 0) return -1;
    for (i = 1; i < n; i++)
        sumRight += A[i];
    i = 0;
    do {
        if (sumLeft == sumRight)
            return i;
        sumLeft += A[i];
        if ((i + 1) < n)
            sumRight -= A[i + 1];
        i++;
    } while (i < n);
    return -1;
}
Probably not perfect but it passes their tests anyway :)
Can't say I'm a big fan of Codility though - it is an interesting idea, but I found the requirements a little too vague. I think I'd be more impressed if they gave you requirements + a suite of unit tests that test those requirements and then asked you to write code. That's how most TDD happens anyway. I don't think doing it blind really gains anything other than allowing them to throw in some corner cases.
private static int equi(int[] A) {
    if (A == null || A.length == 0)
        return -1;
    long tot = 0;
    int len = A.length;
    for (int i = 0; i < len; i++)
        tot += A[i];
    if (tot == 0)
        return (len - 1);
    long partTot = 0;
    for (int i = 0; i < len - 1; i++)
    {
        partTot += A[i];
        if (partTot * 2 + A[i + 1] == tot)
            return i + 1;
    }
    return -1;
}
I considered the array as a balance scale: if an equilibrium index exists, then half of the weight is on the left. So I only compare partTot (the partial total) times 2, plus the candidate element, with the total weight of the array.
The algorithm takes O(n) + O(n).
100% correctness and performance; this code is tested:
Private Function equi(ByVal A() As Integer) As Integer
    Dim index As Integer = -1
    If A IsNot Nothing AndAlso A.Length > 0 Then
        Dim sumLeft As Long = 0
        Dim sumRight As Long = ArraySum(A)
        For i As Integer = 0 To A.Length - 1
            Dim val As Integer = A(i)
            sumRight -= val
            If sumLeft = sumRight Then
                index = i
            End If
            sumLeft += val
        Next
    End If
    Return index
End Function
Just a thought: not sure whether accessing the pointer directly would be faster.
long r = 0;
int* pStart = a + begin;
int* pEnd = a + end;
while (pStart != pEnd)
{
    r += *pStart++;
}
{In Pascal + Assembly}
{$ASMMODE INTEL}
function equi(A: Array of longint; n: longint): longint;
var
  c: Longint;
label
  noOverflow1, noOverflow2, ciclo, fine, over, tot;
Begin
  Asm
    DEC n
    JS over
    XOR ECX, ECX {Somma1}
    XOR EDI, EDI {Somma2}
    XOR EAX, EAX
    MOV c, EDI
    MOV ESI, n
  tot:
    MOV EDX, A
    MOV EDX, [EDX+ESI*4]
    PUSH EDX
    ADD ECX, EDX
    JNO noOverflow1
    ADD c, ECX
  noOverflow1:
    DEC ESI
    JNS tot
    SUB ECX, c
    SUB EDI, c
  ciclo:
    POP EDX
    SUB ECX, EDX
    CMP ECX, EDI
    JE fine
    ADD EDI, EDX
    JNO noOverflow2
    DEC EDI
  noOverflow2:
    CMP EAX, n
    JA over
    INC EAX
    JMP ciclo
  over:
    MOV EAX, -1
  fine:
  end;
End;
This got me 100% in JavaScript:
function solution(A) {
    if (!(A) || !(Array.isArray(A)) || A.length < 1) {
        return -1;
    }
    if (A.length === 1) {
        return 0;
    }
    var sum = A.reduce(function (a, b) { return a + b; }),
        lower = 0,
        i,
        val;
    for (i = 0; i < A.length; i++) {
        val = A[i];
        if (((sum - lower) - val) === lower) {
            return i;
        }
        lower += val;
    }
    return -1;
}
Here is my answer, with explanations on how to go about it. It will get you 100%.
class Solution
{
    public int solution(int[] A)
    {
        long sumLeft = 0;    // sum of elements to the left of the current index
        long sumRight = 0;   // sum of elements to the right of the current index
        long sum = 0;        // sum of all elements in the array
        long leftHolder = 0; // sum of all elements up to and including the current index

        // Calculate the total sum of all elements in the array
        for (int i = 0; i < A.Length; i++)
        {
            sum += A[i];
        }

        for (int i = 0; i < A.Length; i++)
        {
            // Sum of all elements before the current element, plus the current element
            leftHolder += A[i];
            // Sum of all elements to the right of the current element
            sumRight = sum - leftHolder;
            // Sum of all elements to the left of the current element, excluding it
            sumLeft = sum - sumRight - A[i];
            // If the left and right sums are equal, return the current index
            if (sumLeft == sumRight)
                return i;
        }
        // Otherwise return -1
        return -1;
    }
}
This may be old, but here is a solution in Go with a 100% pass rate:
package solution

func Solution(A []int) int {
    // write your code in Go 1.4
    var left int64
    var right int64
    equi := -1
    if len(A) == 0 {
        return equi
    }
    for _, el := range A {
        right += int64(el)
    }
    for i, el := range A {
        right -= int64(el)
        if left == right {
            equi = i
        }
        left += int64(el)
    }
    return equi
}
