Basically, I am using a MeshGeometry3D to load points (positions and normals) from an STL file.
The STL file format duplicates points, so I want to search MeshGeometry3D.Positions for a duplicate before adding each newly read point.
Mesh.Positions.IndexOf(somePoint3D) does not work for this, because it compares based on the object reference rather than the X, Y, Z values of the Point3D. This is why I am iterating over the entire collection to find duplicates manually:
// triangle is a custom class containing three vertices of type Point3D;
// for each 3 points read from the STL the triangle object is reinitialized
vertex1DuplicateIndex = -1;
vertex2DuplicateIndex = -1;
vertex3DuplicateIndex = -1;

for (int q = tempMesh.Positions.Count - 1; q >= 0; q--)
{
    if (vertex1DuplicateIndex == -1)
        if (tempMesh.Positions[q] == triangle.Vertex1)
            vertex1DuplicateIndex = q;

    if (vertex2DuplicateIndex == -1)
        if (tempMesh.Positions[q] == triangle.Vertex2)
            vertex2DuplicateIndex = q;

    if (vertex3DuplicateIndex == -1)
        if (tempMesh.Positions[q] == triangle.Vertex3)
            vertex3DuplicateIndex = q;

    if (vertex1DuplicateIndex != -1 && vertex2DuplicateIndex != -1 && vertex3DuplicateIndex != -1)
        break;
}
This code is actually very efficient when duplicates are found, but when there is no duplicate the collection is iterated entirely, which is very slow for big meshes with more than a million positions.
Is there another approach to the search?
Is there a way to force Mesh.Positions.IndexOf(newPoint3D) to compare by value, the way Mesh.Positions[index] == somePoint3D does, rather than the reference comparison it is doing now?
I don't know of a built-in way to do this, but you could use a hash map to cache the indices of the 3D vectors.
Depending on the quality of your hash function for the vectors, you'll get a 'sort of' constant-time lookup (avoiding collisions entirely is impossible, but it should still be much faster than iterating through all the vertex data for each new triangle point).
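A minimal sketch of that idea (my code, not the asker's): since WPF's Point3D is a struct whose Equals/GetHashCode compare by X, Y, Z values, it can be used directly as a dictionary key, giving you the hashed index cache.

using System.Collections.Generic;
using System.Windows.Media.Media3D;

// Cache each unique position's index, keyed by the point itself.
var indexByPosition = new Dictionary<Point3D, int>();

int GetOrAddPosition(MeshGeometry3D mesh, Point3D p)
{
    if (indexByPosition.TryGetValue(p, out int existing))
        return existing;                  // duplicate: reuse the cached index

    int index = mesh.Positions.Count;     // new point: next free slot
    mesh.Positions.Add(p);
    indexByPosition[p] = index;
    return index;
}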
Using hakononakani's idea I've managed to speed things up, using a combination of a HashSet and a Dictionary. The following is a simplified version of my code:
class CustomTriangle
{
    public Vector3D Normal { get; set; }
    public Point3D Vertex1 { get; set; }
    public Point3D Vertex2 { get; set; }
    public Point3D Vertex3 { get; set; }
}
private void loadMesh()
{
    CustomTriangle triangle;
    MeshGeometry3D tempMesh = new MeshGeometry3D();
    HashSet<string> meshPositionsHashSet = new HashSet<string>();
    Dictionary<string, int> meshPositionsDict = new Dictionary<string, int>();
    int vertex1DuplicateIndex, vertex2DuplicateIndex, vertex3DuplicateIndex;
    int numberOfTriangles = GetNumberOfTriangles();

    for (int i = 0, j = 0; i < numberOfTriangles; i++)
    {
        triangle = ReadTriangleDataFromSTLFile();

        vertex1DuplicateIndex = -1;
        if (meshPositionsHashSet.Add(triangle.Vertex1.ToString()))
        {
            tempMesh.Positions.Add(triangle.Vertex1);
            // the freshly added point is always the last one, so its index is
            // Count - 1; IndexOf would rescan the whole collection for nothing
            meshPositionsDict.Add(triangle.Vertex1.ToString(), tempMesh.Positions.Count - 1);
            tempMesh.Normals.Add(triangle.Normal);
            tempMesh.TriangleIndices.Add(j++);
        }
        else
        {
            vertex1DuplicateIndex = meshPositionsDict[triangle.Vertex1.ToString()];
            tempMesh.TriangleIndices.Add(vertex1DuplicateIndex);
            tempMesh.Normals[vertex1DuplicateIndex] += triangle.Normal;
        }
        // Do the same for vertex2 and vertex3
    }
}
At the end, tempMesh will contain only unique points. All you have to do is normalize all the Normals and you're ready to visualize.
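For completeness, the normalization step could look like this (a sketch; note that the Normals indexer returns a copy of the Vector3D struct, so the normalized value has to be written back):

for (int k = 0; k < tempMesh.Normals.Count; k++)
{
    Vector3D n = tempMesh.Normals[k]; // the indexer returns a copy of the struct
    n.Normalize();                    // normalizes the copy in place
    tempMesh.Normals[k] = n;          // write the normalized copy back
}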
The same can be achieved with just the Dictionary, using:
if (!meshPositionsDict.ContainsKey(triangle.Vertex1.ToString()))
I just like using the HashSet, because it's fun to work with :)
In both cases the final result is an algorithm roughly 60 times faster than before!
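As a side note, a compact variant of that branch (a sketch of the same logic, not the original code) can do the existence check and the index lookup in a single hash probe with TryGetValue:

string key = triangle.Vertex1.ToString();
if (meshPositionsDict.TryGetValue(key, out int duplicateIndex))
{
    // duplicate: reuse the stored index and accumulate the normal
    tempMesh.TriangleIndices.Add(duplicateIndex);
    tempMesh.Normals[duplicateIndex] += triangle.Normal;
}
else
{
    // new point: its index will be the current count of positions
    meshPositionsDict.Add(key, tempMesh.Positions.Count);
    tempMesh.TriangleIndices.Add(tempMesh.Positions.Count);
    tempMesh.Positions.Add(triangle.Vertex1);
    tempMesh.Normals.Add(triangle.Normal);
}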
Right now, I'm working on a QuadTree LOD system for planets. In general everything is working quite nicely.
As a rough description of how the mesh is generated:
Every node generated by the QuadTree contains a container called "NodeMeshData", which holds all the information needed to generate a simple Mesh:
public class NodeMeshData
{
    public Vector3[] vertices;
    public int[] indices;
    public Vector2[] uvs;

    public void Clear()
    {
        vertices = null;
        indices = null;
        uvs = null;
    }
}
Whenever the mesh needs to be regenerated, all QuadTree nodes without any children (leaf nodes) are asked for their NodeMeshData. All of these are put into one array of NodeMeshData objects.
public void UpdateQuadTree(Vector3 playerPosition)
{
    _quadTree.UpdateTree(playerPosition);
    NodeMeshData[] data = _quadTree.GetNodeMeshData().ToArray();
    CombinedMeshData.Combine(data);
}
(CombinedMeshData is a NodeMeshData property of the class this method belongs to.)
The problem lies in combining all these NodeMeshData objects into one to generate the final mesh: the extension method "Combine". By process of elimination I found that it must be this method that causes my problem.
public static class NodeMeshHelper
{
    public static void Combine(this NodeMeshData newData, IList<NodeMeshData> nodeMeshData)
    {
        int verticesCount = nodeMeshData.Sum(static nmd => nmd.vertices.Length);
        int indicesCount = nodeMeshData.Sum(static nmd => nmd.indices.Length);
        int uvsCount = nodeMeshData.Sum(static nmd => nmd.uvs.Length);

        List<Vector3> vertices = new(verticesCount);
        List<int> indices = new(indicesCount);
        List<Vector2> uvs = new(uvsCount);

        int lastIndex = 0;
        foreach (var meshData in nodeMeshData)
        {
            vertices.AddRange(meshData.vertices);
            int[] shiftedIndices = meshData.indices.Select(index => index + lastIndex).ToArray();
            // assumes each node's last index is also its highest one
            lastIndex += meshData.indices.Last() + 1;
            indices.AddRange(shiftedIndices);
            uvs.AddRange(meshData.uvs);
        }

        newData.vertices = vertices.ToArray();
        newData.indices = indices.ToArray();
        newData.uvs = uvs.ToArray();
    }
}
Whenever it is called and executed, RAM usage rises. After a few seconds the garbage collector kicks in and cleans up all the unused data, as it is supposed to do. But this results in a nasty frame drop every single time.
My idea is that all these declarations of new Lists, or the AddRange calls, cause the rising RAM usage. From asking my colleagues I learned that declaring Lists with a fixed capacity should eliminate the internal array reallocations within the List objects when AddRange is called. But as you can see, I already do that, and it doesn't help. I also tried changing NodeMeshHelper into a non-static class with fields for the lists or for the final NodeMeshData result. That didn't help either.
What can I do to fix this? Is there a simple way, e.g. changing the "Combine" method so that it doesn't allocate new memory every single time? Or do I need to completely rethink my QuadTree or mesh creation algorithm (I hope not)? What about compute shaders?
For anyone interested in the full code:
GitHub
UPDATE
I changed the NodeMeshHelper to no longer use Lists; now it uses arrays. I did that because it seems to improve performance a bit, and I hope it is a step toward fixing my major problem. Yes, I know there are still "new" statements for the arrays.
public static class NodeMeshHelper
{
    public static void Combine(this NodeMeshData newData, NodeMeshData[] nodeMeshData)
    {
        int vertexIterator = 0;
        int indexIterator = 0;
        int uvIterator = 0;

        int verticesCount = 0;
        foreach (var nmd in nodeMeshData)
            verticesCount += nmd.vertices.Length;
        int indicesCount = 0;
        foreach (var nmd in nodeMeshData)
            indicesCount += nmd.indices.Length;
        int uvsCount = 0;
        foreach (var nmd in nodeMeshData)
            uvsCount += nmd.uvs.Length;

        Vector3[] vertices = new Vector3[verticesCount];
        int[] indices = new int[indicesCount];
        Vector2[] uvs = new Vector2[uvsCount];

        int lastIndex = 0;
        foreach (var meshData in nodeMeshData)
        {
            // vertices
            foreach (Vector3 vertex in meshData.vertices)
            {
                vertices[vertexIterator] = vertex;
                vertexIterator++;
            }

            // indices
            foreach (int index in meshData.indices)
            {
                indices[indexIterator] = index + lastIndex;
                indexIterator++;
            }
            lastIndex += meshData.indices[^1] + 1;

            // uvs
            foreach (Vector2 uv in meshData.uvs)
            {
                uvs[uvIterator] = uv;
                uvIterator++;
            }
        }

        newData.vertices = vertices;
        newData.indices = indices;
        newData.uvs = uvs;
    }
}
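One direction that could remove the per-call garbage entirely (a sketch with my own names, not code from the linked repo): reuse the same Lists across calls, since List<T>.Clear() keeps the underlying array, and hand them to Unity's List-based Mesh setters so the ToArray() copies disappear as well.

using System.Collections.Generic;
using UnityEngine;

public static class NodeMeshCombiner
{
    // persistent buffers: Clear() keeps their capacity, so once they have
    // grown to the largest combined mesh, steady-state calls allocate nothing
    static readonly List<Vector3> _vertices = new List<Vector3>();
    static readonly List<int> _indices = new List<int>();
    static readonly List<Vector2> _uvs = new List<Vector2>();

    public static void CombineInto(NodeMeshData[] nodeMeshData, Mesh target)
    {
        _vertices.Clear();
        _indices.Clear();
        _uvs.Clear();

        int offset = 0;
        foreach (var data in nodeMeshData)
        {
            _vertices.AddRange(data.vertices);
            foreach (int index in data.indices)
                _indices.Add(index + offset); // no temporary Select/ToArray
            _uvs.AddRange(data.uvs);
            offset += data.vertices.Length;   // shift by the vertices appended
        }

        target.Clear();
        target.SetVertices(_vertices);        // List overloads avoid array copies
        target.SetTriangles(_indices, 0);
        target.SetUVs(0, _uvs);
    }
}

Here the offset advances by the number of vertices appended, which also avoids relying on each node's last index being its highest.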
UPDATE 2
I also tried to implement this with Unity's Job System. But unfortunately it does not support NativeArrays of NativeArrays, which (in my head) is necessary for my purposes.
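For what it's worth, the usual workaround for the missing nested containers (an assumption on my side, untested against this project) is to flatten everything into single NativeArrays plus offset bookkeeping:

using Unity.Collections;
using Unity.Jobs;

// All nodes' indices live back to back in one flat array; each entry's
// vertex offset is precomputed into a second array of equal length.
struct ShiftIndicesJob : IJobParallelFor
{
    public NativeArray<int> indices;             // node0|node1|... flattened
    [ReadOnly] public NativeArray<int> offsets;  // per-entry vertex offset

    public void Execute(int i)
    {
        indices[i] += offsets[i];
    }
}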
I'm trying to create some pairs of unique numbers using a pretty simple algorithm.
For some unknown reason, after compiling, Unity goes into an endless "not responding" state. It seems to be stuck in the do..while loop, but I don't see any reason for that.
// Creating two lists to store random numbers
List<int> xList = new List<int>();
List<int> yList = new List<int>();
int rx, ry;

for (int i = 0; i < 10; i++)
{
    // look for numbers until they are unique (while they are in the lists)
    do
    {
        rx = rand.Next(0, width);
        ry = rand.Next(0, height);
    }
    while (xList.Contains(rx) || yList.Contains(ry));

    // add them to the lists
    xList.Add(rx);
    yList.Add(ry);

    Debug.Log(rx + ", " + ry);

    // some actions with these numbers
    gridArray[rx, ry].isBomb = true;
    gridArray[rx, ry].changeSprite(bombSprite);
}
As mentioned, the issue is that once every row and column value has been used once, you are stuck in the do-while loop.
Instead you should rather:
- generate the plain index list of all possible pairs (I will use the Unity built-in type Vector2Int, but you could do the same using your own struct/class),
- for each bomb to place, pick a random entry from the list of pairs,
- remove the picked item from the pairs so it is not available anymore in the next iteration.
Something like
// create the plain pair list
var pairs = new List<Vector2Int>(width * height);
for (var x = 0; x < width; x++)
{
    for (var y = 0; y < height; y++)
    {
        pairs.Add(new Vector2Int(x, y));
    }
}

// so now you have all possible pairs in one list
if (pairs.Count < BOMB_AMOUNT_TO_PLACE)
{
    Debug.LogError("You are trying to place more bombs than there are fields in the grid!");
    return;
}

// Now place your bombs one by one on a random spot in the grid
for (var i = 0; i < BOMB_AMOUNT_TO_PLACE; i++)
{
    // now all you need to do is pick one random index from the possible entries
    var randomIndexInPairs = Random.Range(0, pairs.Count);
    var randomPair = pairs[randomIndexInPairs];

    // and at the same time remove the according entry
    pairs.RemoveAt(randomIndexInPairs);

    // Now you have completely unique but random index pairs
    var rx = randomPair.x;
    var ry = randomPair.y;
    gridArray[rx, ry].isBomb = true;
    gridArray[rx, ry].changeSprite(bombSprite);
}
Depending on your use case, as an alternative to generating the pairs list and then removing entries again, you could also generate it once, shuffle it, and then use:
if (pairs.Count < BOMB_AMOUNT_TO_PLACE)
{
    Debug.LogError("You are trying to place more bombs than there are fields in the grid!");
    return;
}

var random = new System.Random();
// materialize the shuffle with ToList so it is evaluated only once
// and can be indexed below
var shuffledPairs = pairs.OrderBy(e => random.Next()).ToList();

for (var i = 0; i < BOMB_AMOUNT_TO_PLACE; i++)
{
    // then you can directly use
    var randomPair = shuffledPairs[i];

    // Now you have completely unique but random index pairs
    var rx = randomPair.x;
    var ry = randomPair.y;
    gridArray[rx, ry].isBomb = true;
    gridArray[rx, ry].changeSprite(bombSprite);
}
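If the LINQ shuffle ever shows up in profiling, an in-place Fisher-Yates shuffle (a sketch using the same pairs list) does the same job in O(n) without the sort:

var random = new System.Random();
for (var i = pairs.Count - 1; i > 0; i--)
{
    var j = random.Next(i + 1);                  // 0 <= j <= i
    (pairs[i], pairs[j]) = (pairs[j], pairs[i]); // swap
}
// then take the first BOMB_AMOUNT_TO_PLACE entries as before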
Although your algorithm is maybe not the best way to place ten bombs in a grid, it should work.
The problem is that your while condition uses an OR, which means that as soon as a bomb exists somewhere in a row (in any column), no further bomb can ever be placed in that row, and the same goes for columns.
You therefore end up in an infinite loop pretty soon, because every bomb you place locks an entire row and an entire column.
If you use an AND condition instead, you only reject a pair when both of its coordinates have been used before, which is enough to guarantee the pair itself is unique: effectively you lock only that cell.
Provided, of course, that width x height is more than ten.
I'm creating an algorithm to solve sudokus using constraint propagation and local search (similar to Norvig's). For this I keep track of a list of possible values for each of the squares in the sudoku. For each attempt to assign a new value to a square, I clone the array and pass it recursively to the algorithm's search method. However, somehow the list is still being altered. The method concerned is:
List<int>[,] search(List<int>[,] vals)
{
    if (vals == null)
        return null;
    if (isSolved(vals))
        return vals;

    // get the square with the lowest amount of possible values
    int min = Int32.MaxValue;
    Point minP = new Point();
    for (int y = 0; y < n * n; y++)
    {
        for (int x = 0; x < n * n; x++)
        {
            if (vals[y, x].Count < min && vals[y, x].Count > 1)
            {
                min = vals[y, x].Count;
                minP = new Point(x, y);
            }
        }
    }

    for (int i = vals[minP.Y, minP.X].Count - 1; i >= 0; i--)
    {
        // <Here> The list vals[minP.Y, minP.X] is altered within this for loop
        // somehow, even though the only thing done with it is it being cloned
        // and passed to the assign method for another (altered) search afterwards
        Console.WriteLine(vals[minP.Y, minP.X].Count + "-" + min);
        int v = vals[minP.Y, minP.X][i];
        List<int>[,] newVals = (List<int>[,])vals.Clone();
        List<int>[,] l = search(assign(minP, v, newVals));
        if (l != null)
            return l;
    }
    return null;
}
The list vals[minP.Y, minP.X] is somehow altered within the for loop, which eventually causes squares with only 1 (or even 0) possible values to be passed to the assign method. The Console.WriteLine statement shows that vals[minP.Y, minP.X].Count will eventually differ from the 'min' variable (which is set to that same count just above the loop).
If anyone could help me out on how the list is altered within this for loop and how to fix it, it'd be much appreciated!
Best regards.
EDIT: The methods in which these lists are edited (on a cloned version, however):
List<int>[,] assign(Point p, int v, List<int>[,] vals)
{
    int y = p.Y, x = p.X;
    for (int i = vals[y, x].Count - 1; i >= 0; i--)
    {
        int v_ = vals[y, x][i];
        if (v_ != v && !eliminate(p, v_, vals))
            return null;
    }
    return vals;
}

bool eliminate(Point p, int v, List<int>[,] vals)
{
    if (!vals[p.Y, p.X].Remove(v))
        return true; // already eliminated

    // Update peers when only 1 possible value is left
    if (vals[p.Y, p.X].Count == 1)
    {
        foreach (Point peer in peers[p.Y, p.X])
            if (!eliminate(peer, vals[p.Y, p.X][0], vals))
                return false;
    }
    else if (vals[p.Y, p.X].Count == 0)
        return false;

    // Update units
    List<Point> vplaces = new List<Point>();
    foreach (Point unit in units[p.Y, p.X])
    {
        if (vals[unit.Y, unit.X].Contains(v))
        {
            vplaces.Add(unit);
            if (vplaces.Count > 1)
                continue;
        }
    }

    if (vplaces.Count == 0)
        return false;
    else if (vplaces.Count == 1)
    {
        Console.WriteLine("test");
        if (assign(vplaces[0], v, vals) == null)
            return false;
    }
    return true;
}
Your problem is with
List<int>[,] newVals = (List<int>[,])vals.Clone();
Array.Clone() doesn't do what you think it does here. List<int>[,] is a two-dimensional array of List<int> references. Since List<int> is a reference type, Clone() creates a shallow copy: a brand-new two-dimensional array whose every cell points to the same List<int> objects as the old one.
In other words, whenever you access one of the underlying List<int> objects, you get the same list regardless of whether you go through the original array or through the clone, so mutations made during the recursive search show up in both.
See the documentation on it here, and some solutions are here and here.
Effectively, you need to rewrite that line so that rather than copying just the array, it copies each cell's list into a new List<int>.
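A minimal sketch of such a deep copy (my helper name, assuming the grid from the question):

List<int>[,] DeepCopy(List<int>[,] vals)
{
    int rows = vals.GetLength(0), cols = vals.GetLength(1);
    var copy = new List<int>[rows, cols];
    for (int y = 0; y < rows; y++)
        for (int x = 0; x < cols; x++)
            copy[y, x] = new List<int>(vals[y, x]); // new list per cell; the ints are copied
    return copy;
}

The Clone() call in search would then become DeepCopy(vals).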
I have a method that returns thousands of points to be displayed in a dygraph section in the frontend; it is fed with a list like this:
List<int> pointsResult = GetMyPoints();
The graph itself is very small, so I think only representative points need to be displayed.
What could be the best approach to get, for example, 100 values instead of thousands?
These int values can be very regular, and only the representative points need to be displayed.
I found this C# implementation: A C# Implementation of Douglas-Peucker Line Approximation Algorithm
public static List<Point> DouglasPeuckerReduction(List<Point> Points, Double Tolerance)
{
    if (Points == null || Points.Count < 3)
        return Points;

    Int32 firstPoint = 0;
    Int32 lastPoint = Points.Count - 1;
    List<Int32> pointIndexsToKeep = new List<Int32>();

    // Add the first and last index to the keepers
    pointIndexsToKeep.Add(firstPoint);
    pointIndexsToKeep.Add(lastPoint);

    // The first and the last point cannot be the same
    while (Points[firstPoint].Equals(Points[lastPoint]))
    {
        lastPoint--;
    }

    DouglasPeuckerReduction(Points, firstPoint, lastPoint, Tolerance, ref pointIndexsToKeep);

    List<Point> returnPoints = new List<Point>();
    pointIndexsToKeep.Sort();
    foreach (Int32 index in pointIndexsToKeep)
    {
        returnPoints.Add(Points[index]);
    }
    return returnPoints;
}
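A rough usage sketch for the int series (assumptions on my side: the recursive overload from the linked article is present, and Point here is System.Windows.Point): map each sample to a Point with its index as x, then tune the tolerance until the reduced series is near the ~100 points the small graph can show.

List<int> pointsResult = GetMyPoints();
List<Point> raw = pointsResult
    .Select((value, index) => new Point(index, value)) // x = sample index, y = value
    .ToList();

List<Point> reduced = DouglasPeuckerReduction(raw, 1.0);
// If reduced.Count is still too high, increase the tolerance and run again.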
I'm writing a Geometry Wars inspired game, except with added 2D rigid body physics, AI pathfinding, some waypoint analysis, line-of-sight checks, load balancing, etc. Even though it works reasonably fast with all of that enabled at around 80-100 enemies on screen, the performance completely breaks down once you get to a total of 250 objects or so (150 enemies). I've searched for any O(n^2) parts in the code, but there don't seem to be any left. I'm also using spatial grids.
Even if I disable pretty much all of the supposedly expensive AI-related processing, it doesn't seem to matter; it still breaks down at 150 enemies.
Now, I implemented all the code from scratch, currently even the matrix multiplication code, and I'm relying almost completely on the GC as well as using C# closures for some things, so I expect this to be seriously far from optimized. But it still doesn't make sense to me that with about 1/15 of the processing work but double the objects, the game suddenly slows down to a crawl. Is this normal? How is the XNA platform supposed to scale with respect to the number of objects being processed?
I remember a slerp spinning cube demo I did at first could handle more than 1000 at once, so I think I'm doing something wrong?
EDIT:
Here's the grid structure's class:
public abstract class GridBase
{
    public const int WORLDHEIGHT = (int)AIGridInfo.height;
    public const int WORLDWIDTH = (int)AIGridInfo.width;

    protected float cellwidth;
    protected float cellheight;
    int no_of_col_types;

    // a dictionary of lists that gets cleared every frame
    // 3 (= no_of_col_types) groups of objects (enemy side, player side, neutral)
    // 4000 initial Dictionary hash positions for each group
    // I have also tried using an array of lists of 100*100 cells
    // with pretty much identical results
    protected Dictionary<CoordsInt, List<Collidable>>[] grid;

    public GridBase(float cellwidth, float cellheight, int no_of_col_types)
    {
        this.no_of_col_types = no_of_col_types;
        this.cellheight = cellheight;
        this.cellwidth = cellwidth;

        grid = new Dictionary<CoordsInt, List<Collidable>>[no_of_col_types];
        for (int u = 0; u < no_of_col_types; u++)
            grid[u] = new Dictionary<CoordsInt, List<Collidable>>(4000);
    }

    public abstract void InsertCollidable(Collidable c);
    public abstract void InsertCollidable(Grid_AI_Placeable aic);

    // gets called in the update loop
    public void Clear()
    {
        for (int u = 0; u < no_of_col_types; u++)
            grid[u].Clear();
    }

    // gets the grid cell of the lower left corner
    protected void BaseCell(Vector3 v, out int gx, out int gy)
    {
        gx = (int)((v.X + (WORLDWIDTH / 2)) / cellwidth);
        gy = (int)((v.Y + (WORLDHEIGHT / 2)) / cellheight);
    }

    // gets all cells covered by the AABB
    protected void Extent(Vector3 pos, float aabb_width, float aabb_height, out int totalx, out int totaly)
    {
        var xpos = pos.X + (WORLDWIDTH / 2);
        var ypos = pos.Y + (WORLDHEIGHT / 2);
        totalx = -(int)((xpos / cellwidth)) + (int)((xpos + aabb_width) / cellwidth) + 1;
        totaly = -(int)((ypos / cellheight)) + (int)((ypos + aabb_height) / cellheight) + 1;
    }
}
public class GridBaseImpl1 : GridBase
{
    public GridBaseImpl1(float widthx, float widthy)
        : base(widthx, widthy, 3)
    {
    }

    // adds a collidable to the grid / caches for intersection test
    // checks if it should be tested to prevent penetration / tests penetration
    // updates close, intersecting, touching lists
    // Collidable is an interface for all objects that can be tested geometrically
    // the dictionary is indexed by a simple struct that wraps the row and column number in the grid
    public override void InsertCollidable(Collidable c)
    {
        // some tag so that objects don't get checked more than once
        Grid_Query_Counter.current++;

        // the AABB is allocated on the heap
        var aabb = c.CollisionAABB;
        if (aabb == null) return;

        int gx, gy, totalxcells, totalycells;
        BaseCell(aabb.Position, out gx, out gy);
        Extent(aabb.Position, aabb.widthx, aabb.widthy, out totalxcells, out totalycells);

        // gets which groups to test this object with in an IEnumerable (from a statically created array)
        var groupstestedagainst = CollidableCalls.GetListPrevent(c.CollisionType).Select(u => CollidableCalls.group[u]);
        var groups_tested_against = groupstestedagainst.Distinct();
        var own_group = CollidableCalls.group[c.CollisionType];

        foreach (var list in groups_tested_against)
            for (int i = -1; i < totalxcells + 1; i++)
                for (int j = -1; j < totalycells + 1; j++)
                {
                    var index = new CoordsInt((short)(gx + i), (short)(gy + j));
                    if (grid[list].ContainsKey(index))
                        foreach (var other in grid[list][index])
                        {
                            if (Grid_Query_Counter.Check(other.Tag))
                            {
                                // marks the pair as close; I've tried only keeping the 20 closest but it's still slow
                                other.Close.Add(c);
                                c.Close.Add(other);

                                // caches the pair so that checking if the pair intersects doesn't
                                // go through the grid structure loop again
                                c.CachedIntersections.Add(other);

                                var collision_function_table_id = c.CollisionType * CollidableCalls.size + other.CollisionType;

                                // gets the function to use on the pair for testing penetration;
                                // the function is in a delegate array statically created to simulate multiple dispatch;
                                // the function decides what coarse test to use until descending to some complete geometric query
                                var prevent_delegate = CollidableCalls.preventfunctions[collision_function_table_id];
                                if (prevent_delegate == null) { Grid_Query_Counter.Put(other.Tag); continue; }

                                var a = CollidableCalls.preventfunctions[collision_function_table_id](c, other);

                                // if the query returns true, mark as touching
                                if (a) { c.Contacted.Add(other); other.Contacted.Add(c); }

                                // marks it as tested in this query
                                Grid_Query_Counter.Put(other.Tag);
                            }
                        }
                }

        // adds it to the grid; if the key doesn't exist it creates the list first
        for (int i = -1; i < totalxcells + 1; i++)
            for (int j = -1; j < totalycells + 1; j++)
            {
                var index = new CoordsInt((short)(gx + i), (short)(gy + j));
                if (!grid[own_group].ContainsKey(index)) grid[own_group][index] = new List<Collidable>();
                grid[own_group][index].Add(c);
            }
    }

    [...]
}
First: profile your code, even if you just use manually inserted timestamps around the blocks you're interested in. I prefer the profiler that comes built into Visual Studio Pro.
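For example, manually timing a suspect block can be as simple as this (a sketch using names from your grid code; collidables is an assumed list of your objects):

var sw = System.Diagnostics.Stopwatch.StartNew();

grid.Clear();
foreach (var c in collidables)
    grid.InsertCollidable(c);

sw.Stop();
Console.WriteLine("Broad phase: " + sw.Elapsed.TotalMilliseconds + " ms");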
However, based on your description, I would assume your problem is too many draw calls. Once you exceed 200-400 draw calls per frame, performance can drop dramatically. Try batching your rendering and see if this improves performance.
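For instance, in XNA the difference between one SpriteBatch Begin/End pair per object and a single pair for the whole pass is exactly the difference between hundreds of draw calls and a handful (a sketch; the texture and list names are assumptions):

// one batch for all enemies instead of one draw call each
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
foreach (var enemy in enemies)
    spriteBatch.Draw(enemyTexture, enemy.Position, Color.White);
spriteBatch.End();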
You can use a profiler such as ANTS Profiler to see what may be the problem.
Without any code, there's not much I can do.