Traversing a 2D array as quickly as possible - C#

Let's say I have a fairly large 2D array (a bit over 2 million entries) containing some basic objects (in my case, it's a 2D array of pixel objects).
My goal is to loop through every item in the 2D array and apply some basic modification to each object, such as changing one of its properties.
Currently I am just looping through it with a foreach and accessing each object via the loop.
This takes around 2 seconds to accomplish in C#.
I am wondering if there is a faster way to accomplish this. I wrote similar code in C++ a long time ago and it performed almost instantaneously, so I'm a bit disappointed that performance has suffered in C#.
Can I somehow speed things up with asynchronous code, perhaps? Am I doing something in a less-than-optimal way?
foreach (Pixel p in my2Darray.Pixels) // this is a 2D array, but we can still iterate it with a single loop
{
    p.SetColor(Color.red);
}
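One common way to speed up an independent per-element pass like this is to parallelize it across rows with Parallel.For. A minimal sketch, assuming each pixel can be modified independently; the Pixel type and the plain int color here are stand-ins for the question's types, not the asker's actual code:

```csharp
using System.Threading.Tasks;

// Stand-in for the question's pixel object (assumed shape).
class Pixel
{
    public int Color;
    public void SetColor(int c) => Color = c;
}

class Program
{
    static void Main()
    {
        // About 2 million entries, as in the question.
        var pixels = new Pixel[2000, 1000];
        for (int y = 0; y < pixels.GetLength(0); y++)
            for (int x = 0; x < pixels.GetLength(1); x++)
                pixels[y, x] = new Pixel();

        // Each row is processed on a worker thread; this is safe because
        // every iteration touches a distinct Pixel object.
        Parallel.For(0, pixels.GetLength(0), y =>
        {
            for (int x = 0; x < pixels.GetLength(1); x++)
                pixels[y, x].SetColor(0xFF0000);
        });
    }
}
```

This only helps if SetColor has no shared state; if the work per pixel is trivial, also check whether the cost is really the loop or the per-object property accesses.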

Related

Divide a 2D array into other 2D arrays

I am trying to make a falling-sand simulation game, something like Noita,
and I need to figure out how they divided Noita's world into chunks (that's what the devs said they did in their GDC talk on optimizing the world); I am not sure how they did it.
I also found this person called "MARF" on YouTube:
https://www.youtube.com/watch?v=5Ka3tbbT-9E&list=PLn0iQu83pjfaNwbpEGbn0orKQHbKeD81p&index=2&t=1140s&ab_channel=MARF
At 17:35 he says that he divided the world into chunks, but he didn't show how he did it.
So, does someone know how to do this?
I thought of using the Array.Copy function to copy every 16 x 16 block into a new array, adding each one to a List of 2D arrays, and then updating every chunk as needed, but that method only supports 1D arrays. :/
I also tried making 16 x 16 worlds as chunks and then offsetting them, but I couldn't offset the arrays properly, and it would be a nightmare to figure out how to move elements between chunks.
You are asking the wrong questions.
Please look up "quad trees" on Google.
They are not great for moving objects, but good for static 2D worlds.
The idea of a chunk is to divide the world into smaller parts.
By dividing the screen into smaller units stored as an array, collision checks for a moving object can be limited to the array cells near that object.
A chunk is simply an array whose dimensions are reduced by a fixed factor; for example, a 32 x 32 block of pixels is represented by a single element in the array.
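The world-to-chunk mapping described above is just integer division (with floor semantics so negative coordinates work). A minimal sketch; the 32-pixel chunk size and the Chunk type are illustrative assumptions, not something from Noita or the thread:

```csharp
using System;
using System.Collections.Generic;

class Chunk
{
    public bool Dirty; // e.g. "needs simulation this frame"
}

class ChunkedWorld
{
    const int ChunkSize = 32; // pixels per chunk edge (assumed value)
    readonly Dictionary<(int cx, int cy), Chunk> chunks = new();

    // Floor division, so e.g. pixel x = -1 lands in chunk -1, not chunk 0.
    static int FloorDiv(int a, int b) => (int)Math.Floor((double)a / b);

    // Map a world-space pixel coordinate to its chunk coordinate.
    public static (int, int) ChunkOf(int x, int y)
        => (FloorDiv(x, ChunkSize), FloorDiv(y, ChunkSize));

    // Chunks are created lazily, so the world can grow in any direction.
    public Chunk GetOrCreate(int x, int y)
    {
        var key = ChunkOf(x, y);
        if (!chunks.TryGetValue(key, out var c))
            chunks[key] = c = new Chunk();
        return c;
    }
}
```

With this layout you never copy 16 x 16 regions out of one big array; each chunk owns its own storage, and neighbors are found by chunk coordinate arithmetic.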

Is Dictionary.ContainsKey() any better than FirstOrDefault()?

I know, nothing with one million of anything is going to be performant. But I need that piece of knowledge right now.
I have a Dictionary and a string[]. The boolean in the dictionary is just there to fill the space. Let's imagine this as an inventory system, just to make things easier.
In this inventory, I want to check whether I have already gotten an item. So what I'd do is:
if (dic.ContainsKey(item_id)) // This could be a TryGetValue() as well.
{
    // Do some logic.
}
But would it be better to just have an array?
if (array.FirstOrDefault(a => a == item_id) != null) // note: == for comparison; FirstOrDefault returns the match, not a bool
{
    // Do magic.
}
I mean, which would perform better in this specific case?
I know it's a silly question, but when you can have over one million (or over nine thousand, for the DBZ fans out there xD) checks, things can get pretty heavy, especially on mobile, VR, and other platforms with similar performance constraints.
Plus, I just want my users to have the best experience with my inventory (a.k.a. no lag), so I often take stuff like that into consideration.
There are two tradeoffs here: space and time.
A Dictionary is a relatively heavyweight structure compared to an array.
But the lookup time in a Dictionary (or a HashSet) is basically independent of the number of entries, O(1), while with the array it increases linearly, O(N).
So there is a certain number of items at which the Dictionary (or HashSet) begins to be considerably faster, and 1 million is certainly above that threshold.
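A rough way to see the O(1) vs O(N) difference yourself; the one-million item count and the Stopwatch timing are just for illustration, and absolute numbers will vary by machine:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class Program
{
    static void Main()
    {
        const int N = 1_000_000;
        var array = Enumerable.Range(0, N).Select(i => "item_" + i).ToArray();
        var dict = array.ToDictionary(s => s, s => true);

        string target = "item_" + (N - 1); // worst case for the linear scan

        var sw = Stopwatch.StartNew();
        bool foundLinear = array.FirstOrDefault(a => a == target) != null; // O(N) scan
        sw.Stop();
        Console.WriteLine($"array scan: {foundLinear} in {sw.Elapsed.TotalMilliseconds} ms");

        sw.Restart();
        bool foundHash = dict.ContainsKey(target); // O(1) hash lookup on average
        sw.Stop();
        Console.WriteLine($"dictionary: {foundHash} in {sw.Elapsed.TotalMilliseconds} ms");
    }
}
```

If you only need membership (the bool really is filler), a HashSet&lt;string&gt; gives the same O(1) lookup with less memory than a Dictionary.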

Arrays VS Lists for massive multi-structure data

For a project I'm remaking Minecraft's voxel blocky terrain.
Currently I'm using a three-dimensional array (new Block[,,]), so I can reference a block by its coordinates, like BlockList[x, y, z].BlockID and so on.
But I want infinite terrain, which isn't possible with a single array. Would a List be better for this?
Keep in mind there are ~200k blocks loaded at any given time; I am afraid that looping through each block in the list to find the requested block would be heavy on the CPU.
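A common pattern here (not from the question itself) is to keep fixed-size chunk arrays in a dictionary keyed by chunk coordinates, so block lookup stays O(1) while the world can grow in any direction. A minimal sketch; the Block struct and the 16-block chunk size are assumptions:

```csharp
using System;
using System.Collections.Generic;

struct Block
{
    public int BlockID;
}

class VoxelWorld
{
    const int ChunkSize = 16; // blocks per chunk edge (assumed)
    readonly Dictionary<(int, int, int), Block[,,]> chunks = new();

    // Floor division and non-negative remainder, so negative world
    // coordinates map to the correct chunk and local cell.
    static int FloorDiv(int a, int b) => (int)Math.Floor((double)a / b);
    static int Mod(int a, int b) => ((a % b) + b) % b;

    Block[,,] GetChunk(int x, int y, int z)
    {
        var key = (FloorDiv(x, ChunkSize), FloorDiv(y, ChunkSize), FloorDiv(z, ChunkSize));
        if (!chunks.TryGetValue(key, out var chunk))
            chunks[key] = chunk = new Block[ChunkSize, ChunkSize, ChunkSize];
        return chunk;
    }

    // O(1) block access at any world coordinate: one dictionary lookup
    // plus an array index -- no scanning through 200k blocks.
    public ref Block At(int x, int y, int z)
        => ref GetChunk(x, y, z)[Mod(x, ChunkSize), Mod(y, ChunkSize), Mod(z, ChunkSize)];
}
```

A flat List would indeed force a linear search per lookup; the dictionary-of-chunks keeps the array's direct indexing while dropping the fixed world bounds.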

Optimising movement on hex grid

I am making a turn-based hex-grid game. The player selects units and moves them across the hex grid. Each tile in the grid is of a particular terrain type (e.g. desert, hills, mountains, etc.), and each unit type has different abilities when it comes to moving over the terrain (e.g. some can move over mountains easily, some with difficulty, and some not at all).
Each unit has a movement value, and each tile takes a certain amount of movement based on its terrain type and the unit type. For example, it costs a tank 1 to move over desert and 4 over swamp, and it can't move at all over mountains, whereas a flying unit moves over everything at a cost of 1.
The issue I have is that when a unit is selected, I want to highlight an area around it showing where it can move. This means working out all the possible paths through the surrounding hexes, how much movement each path will take, and lighting up the tiles based on that information.
I got this working with a recursive function but found it took too long to calculate. I moved the function into a thread so that it didn't block the game, but it still takes around 2 seconds for the thread to calculate the movable area for a unit with a move of 8.
It's over a million recursions, which obviously is problematic.
I'm wondering if anyone has any clever ideas on how I can optimize this problem.
Here's the recursive function I'm currently using (it's C#, by the way):
private void CalcMoveGridRecursive(int nCenterIndex, int nMoveRemaining)
{
    // List of the 6 tiles adjacent to the center tile
    int[] anAdjacentTiles = m_ThreadData.m_aHexData[nCenterIndex].m_anAdjacentTiles;
    foreach (int tileIndex in anAdjacentTiles)
    {
        // Make sure this adjacent tile exists
        if (tileIndex == -1)
            continue;

        // How much would it cost the unit to move onto this adjacent tile?
        int nMoveCost = m_ThreadData.m_anTerrainMoveCost[(int)m_ThreadData.m_aHexData[tileIndex].m_eTileType];
        if (nMoveCost != -1 && nMoveCost <= nMoveRemaining)
        {
            // Make sure the adjacent tile isn't already in our list.
            if (!m_ThreadData.m_lPassableTiles.Contains(tileIndex))
                m_ThreadData.m_lPassableTiles.Add(tileIndex);

            // Now check the 6 tiles surrounding the adjacent tile we just checked
            // (it becomes the new center).
            CalcMoveGridRecursive(tileIndex, nMoveRemaining - nMoveCost);
        }
    }
}
At the end of the recursion, m_lPassableTiles contains a list of the indexes of all the tiles that the unit can possibly reach and they are made to glow.
This all works, it just takes too long. Does anyone know a better approach to this?
As you know, with recursive functions you want to make the problem as simple as possible. This still looks like it's trying to bite off too much at once. A couple of thoughts:
Try using a HashSet to store m_lPassableTiles. You could avoid that Contains check this way; on a List it is generally an expensive (linear) operation, while a HashSet lookup is O(1).
I haven't tested the logic of this in my head too thoroughly, but could you add a base case before the foreach loop, namely nMoveRemaining == 0?
Without knowing how your program is designed internally, I would expect m_anAdjacentTiles to contain only existing tiles anyway, so you could eliminate that check (tileIndex == -1). Not a huge performance boost, but it makes your code simpler.
By the way, I think games which do this, like Civilization V, only calculate movement costs as the user suggests intention to move the unit to a certain spot. In other words, you choose a tile, and it shows how many moves it will take. This is a much more efficient operation.
Of course, when you move a unit, surrounding land is revealed -- but I think it only reveals land as far as the unit can move in one "turn," then more is revealed as it moves. If you choose to move several turns into unknown territory, you better watch it carefully or take it one turn at a time. :)
(Later...)
... wait, a million recursions? Yeah, I suppose that's the right math: 6^8 (8 being the movements available) -- but is your grid really that large? 1000x1000? How many tiles away can that unit actually traverse? Maybe 4 or 5 on average in any given direction, assuming different terrain types?
Correct me if I'm wrong (as I don't know your underlying design), but I think there's some overlap going on... major overlap. It's checking adjacent tiles of adjacent tiles already checked. I think the only thing saving you from infinite recursion is the check on moves remaining.
When a tile is added to m_lPassableTiles, remove it from any list of adjacent tiles passed into your function. You're kind of doing something similar with your Contains check... what if you moved the recursive call inside that if statement, so already-visited tiles aren't re-expanded? That should cut your recursive calls down from a million-plus to thousands at most, I imagine.
Thanks for the input, everyone. I solved this by replacing the recursive function with Dijkstra's algorithm, and it works perfectly.
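The Dijkstra-based replacement isn't shown in the thread; a minimal sketch of the usual approach (expand tiles in order of cheapest total cost, so each tile is settled once instead of being re-visited along every path). The data layout here is a simplified stand-in for the m_ThreadData fields, and PriorityQueue requires .NET 6 or later:

```csharp
using System.Collections.Generic;

class MoveRange
{
    // adjacent[i] holds up to 6 neighbor indexes (-1 = no neighbor),
    // moveCost[i] is the cost to enter tile i (-1 = impassable).
    public static HashSet<int> Reachable(int[][] adjacent, int[] moveCost,
                                         int start, int movePoints)
    {
        var best = new Dictionary<int, int> { [start] = 0 }; // cheapest known cost per tile
        var queue = new PriorityQueue<int, int>();           // element = tile, priority = cost
        queue.Enqueue(start, 0);

        while (queue.TryDequeue(out int tile, out int cost))
        {
            if (cost > best[tile])
                continue; // stale queue entry; a cheaper path was already found

            foreach (int next in adjacent[tile])
            {
                if (next == -1 || moveCost[next] == -1)
                    continue; // missing neighbor or impassable terrain

                int total = cost + moveCost[next];
                if (total <= movePoints &&
                    (!best.TryGetValue(next, out int known) || total < known))
                {
                    best[next] = total;
                    queue.Enqueue(next, total);
                }
            }
        }

        best.Remove(start); // the unit's own tile isn't a destination
        return new HashSet<int>(best.Keys);
    }
}
```

The key difference from the recursive version: a tile is only re-queued when a strictly cheaper path to it is found, so the work is bounded by the number of reachable tiles rather than the number of paths.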

Can looking up object parameter in a large object array take too long? (C#)

I'm making a 2D tile-based game. Some of my testing levels have few tiles; some have 500,000 for testing purposes. Running the performance profiler that comes with Visual Studio shows the bottleneck: the check of each tile's Visible property before drawing it.
What is the exact reason it takes so much time? How do I avoid such situations?
UPD: never mind, I'm going through the whole array instead of only the ~200 visible tiles.
It's not that evaluating each individual Visibility property/field takes much time; it's that you are doing it 500k times, and that adds up. Since this is what the profiler estimates is the bottleneck, it means the vast majority of items have Visibility set to false; otherwise you would expect the Draw() method call to show up as the bottleneck instead. One way to optimize this is to separate out the visible items and iterate over only those.
.Visible is a property. To understand why this is a bottleneck, you would need to look at the implementation of its getter. While it looks like a simple Boolean, it could be doing complicated work behind the scenes.
Is 500k a typical scenario? Keep in mind that the right approach varies a lot depending on your real-world needs.
If you really have to deal with 500k tiles, I'd keep a second List of the visible tiles, or a list of ints holding the array indexes of those visible tiles.
A sequential scan for objects with a certain property set means a worst case of running 'n' iterations, where 'n' is the total number of records. That is certainly a bottleneck.
You may want to keep a separate collection of the visible items.
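The "separate collection of visible items" suggestion can be as simple as keeping a second set in sync whenever visibility changes. A minimal sketch with an assumed Tile type; the point is that the draw loop's cost becomes proportional to the ~200 visible tiles, not the 500k total:

```csharp
using System.Collections.Generic;

class Tile
{
    public bool Visible;
    public void Draw() { /* render this tile */ }
}

class TileMap
{
    readonly List<Tile> allTiles = new();
    readonly HashSet<Tile> visibleTiles = new();

    public int VisibleCount => visibleTiles.Count;

    public void Add(Tile t)
    {
        allTiles.Add(t);
        if (t.Visible) visibleTiles.Add(t);
    }

    // Route visibility changes through the map so the visible set stays in sync.
    public void SetVisible(Tile t, bool visible)
    {
        t.Visible = visible;
        if (visible) visibleTiles.Add(t);
        else visibleTiles.Remove(t);
    }

    // Drawing touches only the visible tiles -- no 500k-element scan.
    public void DrawAll()
    {
        foreach (var t in visibleTiles)
            t.Draw();
    }
}
```

The tradeoff is a little bookkeeping on every visibility change in exchange for a draw loop that never looks at invisible tiles.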
