Most efficient algorithm for navigating a 3D array? - c#

TLDR: I need to average the values of all the surrounding coordinates for a specific coordinate for every coordinate in a 3D array
I am making a simplistic weather simulating program in Godot C# and have run into a few problems along the way.
One of the biggest problems I have encountered is performance. To simulate air flow, I have a 3D array containing a direction (a Vector3 object) for each coordinate, and I set each voxel's direction to the average of the directions of the surrounding voxels.
Each voxel also has a pressure value, and a voxel transfers pressure, scaled by the magnitude (speed) of the wind direction, to the voxels the wind direction is pointing to. For example, if a voxel at (x,y,z) has a direction of (1,1,1), the voxel at (x+1,y+1,z+1) will have its pressure set to (x+1,y+1,z+1).pressure + direction.project(Vector3(x+1,y+1,z+1)).length() * (x,y,z).pressure
A voxel will also add Vector3s pointing towards neighboring voxels if the pressure difference is not 0. The length of these vectors is scaled by the pressure difference.
There are a bunch of other properties that need to be averaged up such as temperature, humidity, density, etc.
The real issue is iterating through a 3D array in a way that is fast, very fast. The method I am using at the moment has six nested for loops: Three for iterating over each voxel in the array, and three for iterating over the neighboring voxels within a range of -1 to 1 in each direction. I want to simulate a 16x16x16 area, but this algorithm requires me to do 16x16x16x27 iterations every game tick, rendering the application unplayable.
Is there a more efficient way of doing this?

First, make sure your nested loops are accessing memory in order of the memory's physical location. A 3D array is just a 1D array with some extra strides and offsets that occur behind the scenes. If your array is organized as voxels[x, y, z], then your final nested loop should be the z loop. If you prefer to access voxels in a different pattern, then you can populate your array differently to accommodate the other pattern, such as voxels[z, y, x].
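For illustration, a minimal sketch of that loop ordering, assuming the array is declared as Vector3[,,] voxels (the sizeX/sizeY/sizeZ names are placeholders, not from the question):

// C# multidimensional arrays are stored row-major, so the last index (z here)
// should be the innermost loop in order to walk memory sequentially.
for (int x = 0; x < sizeX; x++)
{
    for (int y = 0; y < sizeY; y++)
    {
        for (int z = 0; z < sizeZ; z++)
        {
            Vector3 direction = voxels[x, y, z]; // read once into a local, then work with it
            // ... average neighbours, update pressure, etc. ...
        }
    }
}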
Second, try to eliminate repeat access of array locations on different loops. If you access a location on one loop and you plan to access it again on the next, try saving the info to a local variable and test to see if it improves speed.
Third, see if you can reduce the number of calculations. For example, if you are calculating an average of all variables in a 3x3x3 grid and then incrementing the location and repeating, you can try saving a running sum. Then on the next loop subtract the 3x3 section you no longer need, add the new 3x3 section, and then divide to get your new average.
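As a sketch of that running-sum idea along the innermost (z) axis, using the same placeholder names as above and ignoring the array borders for brevity (averagedDirections is a second, made-up buffer so later voxels still read the old values):

// Inside the x and y loops: keep the sum of the current 3x3x3 window and slide it
// along z, removing the 3x3 slab that left the window and adding the one that entered.
Vector3 windowSum = Vector3.Zero;
for (int dx = -1; dx <= 1; dx++)
    for (int dy = -1; dy <= 1; dy++)
        for (int dz = 0; dz <= 2; dz++)
            windowSum += voxels[x + dx, y + dy, dz];       // initial window centred on z = 1

for (int z = 1; z < sizeZ - 1; z++)
{
    if (z > 1)
    {
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
            {
                windowSum -= voxels[x + dx, y + dy, z - 2]; // slab that just left the window
                windowSum += voxels[x + dx, y + dy, z + 1]; // slab that just entered it
            }
    }
    // 27 cells including the centre; use (windowSum - voxels[x, y, z]) / 26f
    // if only the 26 surrounding voxels should be averaged.
    averagedDirections[x, y, z] = windowSum / 27f;
}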
On a side note, maybe your example is truncated or conceptual, but it doesn't look like you are actually averaging the values of all surrounding voxels. Each voxel sits at the centre of a 3x3x3 block, so it has 26 surrounding voxels when the centre is excluded; if you count only the face-adjacent neighbours (no edges or corners), there are still 6.

Related

Indoor path-finding c# wpf

I am currently developing an indoor path-finding application. I have multiple floors and different rooms. How will I be able to implement the A* algorithm on the images of each floor using C# WPF?
I use spatial A* for the game I'm working on.
Spatial A* uses a movement "cost" to work out the best route between two points. That cost is supplied by an array, usually a 2D array of numbers - float, uint or whatever.
Moving through a square/cell at position x,y thus costs the number in that 2D array. E.g. costs[2,3] would be the cost of movement through the cell 2 cells across from the left and 3 down from the top of an imaginary grid projected onto your "room".
If the move is diagonal then there's also a multiplier to consider but that will be in whichever implementation you go with.
Hence you need a 2d costed array per floor.
You would need to somehow analyse your pictures and work out an appropriate size for a costed cell. This should match the smallest size of a significant piece of terrain in your floor.
You would then translate your picture into a costed array. You've not told us anywhere near enough to tell you specifically how to do that. Maybe that would have to be a manual process though.
Blocked cells get the max number, empty cells get 1. Depending on your requirements that might be that. Or alternatively you might have actors leaping tables and chairs etc.
You give the pathing algorithm start and target location (x,y), the appropriate costed array and it works out the cheapest route.
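As a very rough sketch of what such a per-floor costed array could look like (the walkable input, the BuildCostGrid name and the cost values are made up; your pathing library will define the exact format it expects):

// Builds a simple per-floor cost grid: blocked cells get a huge cost, open cells cost 1.
// costs[2, 3] is then the cost of moving through the cell 2 across and 3 down.
static float[,] BuildCostGrid(bool[,] walkable)
{
    int width = walkable.GetLength(0);
    int height = walkable.GetLength(1);
    var costs = new float[width, height];
    for (int x = 0; x < width; x++)
        for (int y = 0; y < height; y++)
            costs[x, y] = walkable[x, y] ? 1f : float.MaxValue;
    return costs;
}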

Simulate depressurization in a discrete room

I am trying to build a top down view spaceship game which has destructible parts. I need to simulate the process of depressurization in case of hull breach.
I have a tiled map which has the room partitioning code setup:
What I am trying to do is build some kind of a vector field which would determine the ways the air leaves a depressurized room. So if you were to break the tile connecting the vacuum and the room (adjacent to both the purple and green rooms), you'd end up with a vector map like this:
My idea is to implement some kind of scalar field (similar to a potential field) to help determine the airflow: basically, fill the grid with euclidean distances (taking obstacles into account) to a known zero-potential point, and then calculate the vectors by taking into account all of the adjacent tiles with a lower potential value than the current tile has:
However, this method has a flaw where the amount of force applied to a body at a certain point doesn't really take airflow bottlenecks and distance into account, so the force would be the same in the tile next to the vacuum tile as well as on the opposite end of the room.
Is there a better way to simulate such behavior, or a change to the algorithm I thought of that would more or less realistically take distance and bottlenecks into account?
Algorithm upgrade ideas collected from comments:
(...) if you want a realistic feeling of the "force" in this context, then it should be based not just on the distance but rather, like you said, on the airflow. You'd need to estimate it to some degree, and note that it behaves similarly to Kirchhoff's rule in electronics. Let's say the hole is small - then the amount of air sucked out per second is small. The first nearest tile(s) must cover it, so they lose X air per second. Their surrounding tiles must cover that in turn - they lose X air per second in total. And their neighbours, and so on. It works like a Dijkstra distance, but counting down.
Example: Assuming no walls, start with 16/sec at point zero, directed into the hole in the ground; the 8 surrounding tiles will each get 2/sec directed towards the point-zero tile, the next layer of 12 surrounding tiles will get something like 1.33/sec each, and so on. Now alter that to, e.g., (1) account for various initial hole sizes, (2) various large no-pass-through obstacles, and (3) limitations in air flow due to small passages - which behave like new start points.
Another example (from the map in question): the tile that has a value of zero would have a flow of, say, 1000 units/s. The ones below it would be 500/s each, the next one would be 1000/s as well, and the three connected to it would have 333/s each.
After that, we could base the coefficient for the vector on the difference of this scalar value and since it takes obstacles and distance into account, it would work more or less realistically.
Regarding point (3) above, imagine that instead of having only sure-100%-pass and nope-0%-wall tiles you also have intermediate options. Instead of just a corridor and a wall you could also have, say, a broken window with 30% air pass. For example, at the place on the map with distance [0] you've got the initial hole that generates a flux of 1000/sec. However, at distance [2] there is a small air vent or a broken window with a 30% air flow modifier. It means that it will limit the incoming amount (2x500=1000) to 0.3x(2x500)=300/sec that will now flow further to the next areas. That will allow you to depressurize compartments at different speeds, so the first few tiles will lose all their air quickly and the rest of the deck will take some more time (unless the 30%-modifier window at point [2] breaks completely, etc).
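A minimal sketch of that layer-by-layer flux estimate, done as a breadth-first pass outward from the breach (the OpenNeighbours helper, the distance field, the grid sizes and the 1000/sec figure are all assumptions, and the pass-through modifiers from point (3) are left out):

// distance[,] is a BFS/Dijkstra distance field from the breach, with walls excluded.
// Each tile hands its incoming flux on, split evenly among the open neighbours that
// are one step further from the hole, so per-tile flux falls off with distance and
// concentrates in bottlenecks.
var flux = new float[width, height];
var queue = new Queue<(int x, int y)>();
flux[breach.x, breach.y] = 1000f;                    // air sucked out per second at the hole
queue.Enqueue(breach);

while (queue.Count > 0)
{
    var tile = queue.Dequeue();
    var downstream = new List<(int x, int y)>();
    foreach (var n in OpenNeighbours(tile))          // assumed helper: walkable adjacent tiles
        if (distance[n.x, n.y] == distance[tile.x, tile.y] + 1)
            downstream.Add(n);

    foreach (var n in downstream)
    {
        if (flux[n.x, n.y] == 0f)
            queue.Enqueue(n);
        flux[n.x, n.y] += flux[tile.x, tile.y] / downstream.Count;
    }
}
// The airflow vector on a tile can then point towards its lower-distance neighbour(s),
// scaled by that tile's flux.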

C# List or array techniques to save memory occupation

So I have a simulation I run that has 3 lists of about 200k objects. Each object holds information about a point (x,y,z) and contains an array of x objects.
Depending on the number of frames (of the animation) in the simulation, each Point object holds an array of, say, 64 values.
The whole simulation takes up about 11 gigabytes of RAM. This is too much for a lot of our users. Therefore I would like to know if there are smart ways to save memory usage in this case.
At least 60% of the arrays within these points hold the value 0. I have thought of ways to use lists as pointers so that the same value (for example 0) doesn't get saved in memory a few million times, like I would in C++. However I cannot come up with an implementation for this in C#.
Any tips or tricks to reduce the 11 gigabytes of RAM the simulation occupies are well appreciated!
The Point object as described above is:
public class Vertex
{
    public Point3D position;
    public float[] delta;
}
There are also 3 meshes. Those hold the objects in:
public List<Vertex> Vertices;
The arrays are filled from another list of points that is roughly aligned with the mesh, but not exactly with each vertex. The positions of these points change every frame, so each frame every point has to assign a fraction of its value to each of the nearby vertices. So filling the vertices is not as straightforward as it looks.
I currently approach this by initializing the array with a size of e.g. 64, and then, depending on whether there are any close points, a value (or an average of multiple) is assigned to the position of the current frame in that array.
Using the approach of #kookiz might be possible but will be harder than it looks on the surface.
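Not a full answer, but one rough sketch of the "don't pay for zeros" direction, under the assumption that many vertices never receive any non-zero delta at all (GetDelta/SetDelta are made-up names):

public class Vertex
{
    public Point3D position;

    // null means "every frame is 0"; the 64-entry array is only allocated
    // the first time a non-zero value is actually written.
    private float[] delta;

    public float GetDelta(int frame)
    {
        return delta == null ? 0f : delta[frame];
    }

    public void SetDelta(int frame, float value, int frameCount)
    {
        if (delta == null)
        {
            if (value == 0f)
                return;                     // still all zeros, nothing to store
            delta = new float[frameCount];  // lazily allocate on the first real value
        }
        delta[frame] = value;
    }
}

If most arrays end up with at least one non-zero entry this won't save much; in that case packing only the non-zero (frame, value) pairs, or storing the deltas as 16-bit half-precision floats, are other directions worth measuring.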

Optimising movement on hex grid

I am making a turn based hex-grid game. The player selects units and moves them across the hex grid. Each tile in the grid is of a particular terrain type (eg desert, hills, mountains, etc) and each unit type has different abilities when it comes to moving over the terrain (e.g. some can move over mountains easily, some with difficulty and some not at all).
Each unit has a movement value, and each tile takes a certain amount of movement based on its terrain type and the unit type. E.g. it costs a tank 1 to move over desert, 4 over swamp, and it can't move over mountains at all, whereas a flying unit moves over everything at a cost of 1.
The issue I have is that when a unit is selected, I want to highlight an area around it showing where it can move, this means working out all the possible paths through the surrounding hexes, how much movement each path will take and lighting up the tiles based on that information.
I got this working with a recursive function, but found it took too long to calculate. I moved the function into a thread so that it didn't block the game, but it still takes around 2 seconds for the thread to calculate the moveable area for a unit with a move of 8.
It's over a million recursions, which obviously is problematic.
I'm wondering if anyone has any clever ideas on how I can optimize this problem.
Here's the recursive function I'm currently using (it's C#, btw):
private void CalcMoveGridRecursive(int nCenterIndex, int nMoveRemaining)
{
    // List of the 6 tiles adjacent to the center tile
    int[] anAdjacentTiles = m_ThreadData.m_aHexData[nCenterIndex].m_anAdjacentTiles;
    foreach (int tileIndex in anAdjacentTiles)
    {
        // Make sure this adjacent tile exists
        if (tileIndex == -1)
            continue;

        // How much would it cost the unit to move onto this adjacent tile
        int nMoveCost = m_ThreadData.m_anTerrainMoveCost[(int)m_ThreadData.m_aHexData[tileIndex].m_eTileType];
        if (nMoveCost != -1 && nMoveCost <= nMoveRemaining)
        {
            // Make sure the adjacent tile isn't already in our list.
            if (!m_ThreadData.m_lPassableTiles.Contains(tileIndex))
                m_ThreadData.m_lPassableTiles.Add(tileIndex);

            // Now check the 6 tiles surrounding the adjacent tile we just checked (it becomes the new center).
            CalcMoveGridRecursive(tileIndex, nMoveRemaining - nMoveCost);
        }
    }
}
At the end of the recursion, m_lPassableTiles contains a list of the indexes of all the tiles that the unit can possibly reach and they are made to glow.
This all works, it just takes too long. Does anyone know a better approach to this?
As you know, with recursive functions you want to make the problem as simple as possible. This still looks like it's trying to bite off too much at once. A couple thoughts:
Try using a HashSet structure to store m_lPassableTiles? You could avoid that Contains call this way; on a List it is a linear scan and generally expensive, while a HashSet lookup is roughly constant time.
I haven't tested the logic of this in my head too thoroughly, but could you set a base case before the foreach loop? Namely, that nMoveRemaining == 0?
Without knowing how your program is designed internally, I would expect m_anAdjacentTiles to contain only existing tiles anyway, so you could eliminate that check (tileIndex == -1). Not a huge performance boost, but makes your code simpler.
By the way, I think games which do this, like Civilization V, only calculate movement costs as the user suggests intention to move the unit to a certain spot. In other words, you choose a tile, and it shows how many moves it will take. This is a much more efficient operation.
Of course, when you move a unit, surrounding land is revealed -- but I think it only reveals land as far as the unit can move in one "turn," then more is revealed as it moves. If you choose to move several turns into unknown territory, you better watch it carefully or take it one turn at a time. :)
(Later...)
... wait, a million recursions? Yeah, I suppose that's the right math: 6^8 (8 being the movements available) -- but is your grid really that large? 1000x1000? How many tiles away can that unit actually traverse? Maybe 4 or 5 on average in any given direction, assuming different terrain types?
Correct me if I'm wrong (as I don't know your underlying design), but I think there's some overlap going on... major overlap. It's checking adjacent tiles of adjacent tiles already checked. I think the only thing saving you from infinite recursion is checking the moves remaining.
When a tile is added to m_lPassableTiles, remove it from any list of adjacent tiles received into your function. You're kind of doing something similar in your line with Contains... what if you annexed that if statement to include your recursive call? That should cut your recursive calls down from a million+ to... thousands at most, I imagine.
Thanks for the input everyone. I solved this by replacing the Recursive function with Dijkstra's Algorithm and it works perfectly.
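For reference, a rough sketch of what a Dijkstra-style pass over the same data might look like (this is not the poster's actual code; it reuses the m_ThreadData fields from the question and assumes .NET's PriorityQueue, but any best-first queue works):

// Expands each tile with the best movement remaining seen so far, instead of
// re-visiting the same tiles over and over as the recursive version does.
private void CalcMoveGridDijkstra(int nStartIndex, int nMovePoints)
{
    var bestRemaining = new Dictionary<int, int> { { nStartIndex, nMovePoints } };
    var open = new PriorityQueue<int, int>();       // element: tile index, priority: -movement remaining
    open.Enqueue(nStartIndex, -nMovePoints);

    while (open.Count > 0)
    {
        int nCenterIndex = open.Dequeue();
        int nMoveRemaining = bestRemaining[nCenterIndex];

        foreach (int tileIndex in m_ThreadData.m_aHexData[nCenterIndex].m_anAdjacentTiles)
        {
            if (tileIndex == -1)
                continue;

            int nMoveCost = m_ThreadData.m_anTerrainMoveCost[(int)m_ThreadData.m_aHexData[tileIndex].m_eTileType];
            if (nMoveCost == -1 || nMoveCost > nMoveRemaining)
                continue;

            int nNewRemaining = nMoveRemaining - nMoveCost;
            if (bestRemaining.TryGetValue(tileIndex, out int nKnown) && nKnown >= nNewRemaining)
                continue;                           // already reached this tile with at least as much movement left

            bestRemaining[tileIndex] = nNewRemaining;
            if (!m_ThreadData.m_lPassableTiles.Contains(tileIndex))
                m_ThreadData.m_lPassableTiles.Add(tileIndex);
            open.Enqueue(tileIndex, -nNewRemaining);
        }
    }
}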

On a 3D Terrain, Given a 3D Line, Find the Intersection Point Between the Line and the Terrain

I have a grid of 3D terrain where the coordinates (x,y,z) of every grid point are known. Now, I have a monotonically increasing/decreasing line whose start point is also known. I want to find the point where the terrain and the line meet. What is the algorithm to do it?
What I can think of is to store the coordinates of the 3D terrain in an n x n matrix. I would then segment the line based on the grid in the terrain. I would start with the grid cell nearest to the line and try to compute whether that plane intersects the line; if yes, get the coordinate and exit, if not, proceed to the next segment.
But is my algorithm the best, or the most optimal, solution? Or are there any existing libraries that already do this?
A different approach would be to triangulate the terrain grid to produce a set of facets and then intersect the line with those.
Obviously you'd need to do some optimisations like only checking those facets that intersect the bounding box of the line. You can do a quite cheap/quick facet bounding box to line bounding box check which will discount most of the triangles in the terrain very quickly.
If you arrange your triangles into an octree (as #sum1stolemyname suggested, but for the points) then this checking can be done from the "top down" and you should be able to discount whole sections of the terrain with a single calculation.
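For illustration, a minimal sketch of that cheap bounding-box rejection (Vector3 here is just any 3-component vector with X/Y/Z fields, e.g. System.Numerics; the names are made up):

// Returns false when the two axis-aligned boxes cannot overlap, so the exact
// line/triangle test can be skipped for the vast majority of facets.
static bool BoundsOverlap(Vector3 minA, Vector3 maxA, Vector3 minB, Vector3 maxB)
{
    return minA.X <= maxB.X && maxA.X >= minB.X
        && minA.Y <= maxB.Y && maxA.Y >= minB.Y
        && minA.Z <= maxB.Z && maxA.Z >= minB.Z;
}
// Build minA/maxA once from the line's two endpoints (componentwise min/max),
// and minB/maxB per facet or per octree node.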
Not directly an optimisation, just a few hints:
If your grid is large, it might be worthwhile to build an octree from your terrain in order to quickly reduce the number of grid nodes you have to check your line against. This can be more efficient in a huge grid (like 512*512 nodes), since only the leaf nodes your ray is passing through have to be considered.
Additionally, the octree can be used as a means to decide which parts of your grid are visible and therefore have to be drawn, by checking which leaf nodes are in the viewing frustum.
There is a catch, though: building the octree has to be done in advance, which takes some time, and the tree is static. It cannot be easily modified after it has been constructed, since a modification to one node might affect several other nodes, not necessarily adjacent ones.
However, if you do not plan to modify your grid once it is created, an octree will be helpful.
UPDATE
Now that I understand how you are planning to store your grid, I believe space partitioning will be an efficient way to find the nearest neighbour of the intersection line.
Finding the nearest neighbour linearly has a runtime complexity of O(N), while space-partitioning approaches have an average runtime complexity of O(log N).
If the terrain is not built via a nice function you will have to do a ray trace, i.e. traverse the line step by step in order to find an intersection. This procedure can take some time.
There are several parameters for the procedure, e.g. the offset you walk along the line in each step. If the offset is too large, you might skip over some "heights" of your terrain and thus not get the correct intersection. If the offset is too small, it will slow down your procedure.
However, there is a nice trick to save time. It's described here and here. It uses a kind of optimisation structure for the terrain, i.e. it builds several levels of detail in the following way: the finest level of detail is just the terrain itself. The next (coarser) level contains just a fourth of the original number of "pixels" in the terrain texture and combines 4 pixels into one, taking the maximum. The next level of detail is constructed analogously:
[Diagram: the finest level has height samples at indices 0-7; each coarser level halves the sample count (indices 0,2,4,6, then 0,4, then 0), keeping the maximum of the samples it merges - fine => coarse.]
If the ray cast is now performed, the coarser levels of detail are checked first:
[Diagram: a ray descending towards the terrain is tested against a coarse maximum sample; if it passes above it, all the finer samples covered by that sample can be skipped.]
If the ray already misses the coarse level of detail, no finer level has to be examined. That's just a very rough idea of how the optimisation works, but it works quite well. Implementing it is quite a bit of work, but the paper is a good help.
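As a small illustration of those maximum levels, here is a sketch that builds them for a single 1D row of heights (a power-of-two length is assumed; the 2D case does the same with 2x2 blocks):

// levels[0] is the original row; each following level halves the sample count and
// keeps the maximum of each pair, so if a ray passes above a coarse sample it is
// guaranteed to pass above every finer sample that it covers.
static List<float[]> BuildMaxLevels(float[] heights)
{
    var levels = new List<float[]> { heights };
    float[] current = heights;
    while (current.Length > 1)
    {
        var coarser = new float[current.Length / 2];
        for (int i = 0; i < coarser.Length; i++)
            coarser[i] = Math.Max(current[2 * i], current[2 * i + 1]);
        levels.Add(coarser);
        current = coarser;
    }
    return levels;
}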
