Dijkstra shortest paths: fast recalculation when only an edge is removed - C#

I'm trying to calculate shortest paths. This works with the Dijkstra implementation pasted below, but I want to speed things up.
I use this implementation to decide which field to move to next. The graph represents a two-dimensional array in which every field is connected to its neighbours. But over time the following happens: I need to remove some edges (there are obstacles). The start node is my current position, which also changes over time.
This means:
I never add a node, never add a new edge, and never change the weight of an edge. The only operation is removing an edge.
The start node changes over time.
Questions:
Is there an algorithm which can do a fast recalculation of the shortest paths when I know that the only change in the graph is the removal of an edge?
Is there an algorithm which allows me to quickly recalculate the shortest path when the start node changes only to one of its neighbours?
Is another algorithm maybe better suited for my problem?
Thanks for your help.
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Text;
public class Dijkstra<T>
{
private Node<T> calculatedStart;
private ReadOnlyCollection<Node<T>> Nodes { get; set; }
private ReadOnlyCollection<Edge<T>> Edges { get; set; }
private List<Node<T>> NodesToInspect { get; set; }
private Dictionary<Node<T>, int> Distance { get; set; }
private Dictionary<Node<T>, Node<T>> PreviousNode { get; set; }
public Dijkstra (ReadOnlyCollection<Edge<T>> edges, ReadOnlyCollection<Node<T>> nodes)
{
Edges = edges;
Nodes = nodes;
NodesToInspect = new List<Node<T>> ();
Distance = new Dictionary<Node<T>, int> ();
PreviousNode = new Dictionary<Node<T>, Node<T>> ();
foreach (Node<T> n in Nodes) {
PreviousNode.Add (n, null);
NodesToInspect.Add (n);
Distance.Add (n, int.MaxValue);
}
}
public LinkedList<T> GetPath (T start, T destination)
{
Node<T> startNode = new Node<T> (start);
Node<T> destinationNode = new Node<T> (destination);
CalculateAllShortestDistances (startNode);
// building path going back from the destination to the start always taking the nearest node
LinkedList<T> path = new LinkedList<T> ();
path.AddFirst (destinationNode.Value);
while (PreviousNode[destinationNode] != null) {
destinationNode = PreviousNode [destinationNode];
path.AddFirst (destinationNode.Value);
}
path.RemoveFirst ();
return path;
}
private void CalculateAllShortestDistances (Node<T> startNode)
{
if (startNode.Equals (calculatedStart)) {
return;
}
Distance [startNode] = 0;
while (NodesToInspect.Count > 0) {
Node<T> nearestNode = GetNodeWithSmallestDistance ();
// if we cannot find another node with the function above, we can exit the algorithm and clear the
// nodes to inspect, because they would not be reachable from the start or would not be able to shorten the paths...
// (this also implicitly computes a shortest-path tree rooted at the start, not a minimum spanning tree)
if (nearestNode == null) {
NodesToInspect.Clear ();
} else {
foreach (Node<T> neighbour in GetNeighborsFromNodesToInspect(nearestNode)) {
// calculate distance with the currently inspected neighbour
int dist = Distance [nearestNode] + GetDirectDistanceBetween (nearestNode, neighbour);
// set the neighbour as shortest if it is better than the current shortest distance
if (dist < Distance [neighbour]) {
Distance [neighbour] = dist;
PreviousNode [neighbour] = nearestNode;
}
}
NodesToInspect.Remove (nearestNode);
}
}
calculatedStart = startNode;
}
private Node<T> GetNodeWithSmallestDistance ()
{
int distance = int.MaxValue;
Node<T> smallest = null;
foreach (Node<T> inspectedNode in NodesToInspect) {
if (Distance [inspectedNode] < distance) {
distance = Distance [inspectedNode];
smallest = inspectedNode;
}
}
return smallest;
}
private List<Node<T>> GetNeighborsFromNodesToInspect (Node<T> n)
{
List<Node<T>> neighbors = new List<Node<T>> ();
foreach (Edge<T> e in Edges) {
if (e.Start.Equals (n) && NodesToInspect.Contains (e.End)) {
neighbors.Add (e.End);
}
}
return neighbors;
}
private int GetDirectDistanceBetween (Node<T> startNode, Node<T> endNode)
{
foreach (Edge<T> e in Edges) {
if (e.Start.Equals (startNode) && e.End.Equals (endNode)) {
return e.Distance;
}
}
return int.MaxValue;
}
}

Is there an algorithm which can do a fast recalculation of the shortest paths when I know that the only change in the graph is the removal of an edge?
Is there an algorithm which allows me to quickly recalculate the shortest path when the start node changes only to one of its neighbours?
The answer to both of these questions is yes.
For the first case, the algorithm you're looking for is called LPA* (sometimes, less commonly, called Incremental A*; the title on that paper is outdated). It's a (rather complicated) modification of A* that allows fast recalculation of best paths when only a few edges have changed.
Like A*, LPA* requires an admissible distance heuristic. If no such heuristic exists, you can just set it to 0. Doing this in A* essentially turns it into Dijkstra's algorithm; doing this in LPA* turns it into an obscure, rarely used algorithm called DynamicSWSF-SP.
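Since the graph in this question is a grid, an admissible heuristic is easy to come by. A minimal sketch (the method name and parameters are mine, not from the question's code):

```csharp
// Manhattan distance never overestimates on a 4-connected grid with unit edge costs,
// so it is admissible for A*, LPA* and D*-Lite.
// Returning 0 here instead would turn A* into Dijkstra (and LPA* into DynamicSWSF-SP).
static int Heuristic(int x, int y, int goalX, int goalY)
{
    return Math.Abs(x - goalX) + Math.Abs(y - goalY);
}
```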
For the second case, you're looking for D*-Lite. It is a pretty simple modification of LPA* (simple, at least, once you understand LPA*) that does incremental pathfinding as the unit moves from start to finish and new information is gained (edges are added/removed/changed). It is primarily used for robots traversing unknown or partially known terrain.
I've written up a fairly comprehensive answer (with links to the papers) on various pathfinding algorithms here.

Related

How to get an item from a list by its property, and then use its other property?

class Node
{
int number;
Vector2 position;
public Node(int number, Vector2 position)
{
this.number = number;
this.position = position;
}
}
List<Node> nodes = new List<Node>();
for (int i = 0; i < nodes.Count; i++) // basically a foreach
{
// Here I would like to find each node from the list, in the order of their numbers,
// and check their vectors
}
So, as the code pretty much shows, I am wondering how I can:
find a specific node in the list, specifically one whose "number" attribute equals i (e.g. going through all of them in the order of their "number" attribute);
check its other attribute.
I have tried:
nodes.Find(Node => Node.number == i);
and
Node test = nodes[i];
place = test.position;
but apparently these can't access node.number / node.position due to their protection level.
The second one also has the problem that the nodes have to be sorted first.
I also looked at this question, but the [] solution is in the "tried" category above, and the foreach solution doesn't seem to work for custom classes.
I'm a coding newbie (like 60 hours), so please don't:
explain it in an insanely hard way, or
tell me I'm dumb for not knowing this basic thing.
Thanks!
I would add properties for Number and Position, making them available to outside users (currently their access modifier is private):
class Node
{
public Node(int number, Vector2 position)
{
this.Number = number;
this.Position = position;
}
public int Number { get; private set; }
public Vector2 Position { get; private set; }
}
Now your original attempt should work:
nodes.Find(node => node.Number == i);
However, it sounds like sorting the List<Node> and then accessing by index would be faster. You would be sorting the list once and directly indexing the list vs looking through the list on each iteration for the item you want.
List<Node> SortNodes(List<Node> nodes)
{
List<Node> sortedNodes = new List<Node>();
int length = nodes.Count; // nodes shrinks every iteration, but we still need all the numbers
int a = 0; // the current number we are looking for
while (a < length)
{
for (int i = 0; i < nodes.Count; i++)
{
// if the node's number is the number we are looking for,
if (nodes[i].Number == a)
{
sortedNodes.Add(nodes[i]); // add it to the sorted list
nodes.RemoveAt(i); // and remove it so we don't have to search it again
a++; // go to the next number
break; // break the loop
}
}
}
return sortedNodes;
}
This is a simple sort function; it needs the Number property to be public (as shown above), and note that it empties the original nodes list as it runs.
It returns a list with the nodes in the order you want.
Also: the searching gets faster as more nodes are moved into the sorted list.
Make sure that all nodes have distinct, consecutive numbers starting at 0; otherwise it will get stuck in an infinite loop!
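For completeness, once Number is public the framework can do the sorting for you; a minimal sketch, assuming the Node class shown above:

```csharp
// Sort in place by the Number property: O(n log n) instead of the O(n^2) loop above.
nodes.Sort((a, b) => a.Number.CompareTo(b.Number));

// Or, without mutating the original list (requires `using System.Linq;`):
List<Node> sorted = nodes.OrderBy(n => n.Number).ToList();
```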

Difficulty with Dijkstra's algorithm

I am new to advanced algorithms, so please bear with me. I am currently trying to get Dijkstra's algorithm to work and have spent two days trying to figure this out. I also read the pseudo-code on Wikipedia and got this far. I want to get the shortest distance between two vertices. In the sample below, I keep getting the wrong distance. Please help?
Sample graph setup is as follow:
Graph graph = new Graph();
graph.Vertices.Add(new Vertex("A"));
graph.Vertices.Add(new Vertex("B"));
graph.Vertices.Add(new Vertex("C"));
graph.Vertices.Add(new Vertex("D"));
graph.Vertices.Add(new Vertex("E"));
graph.Edges.Add(new Edge
{
From = graph.Vertices.FindVertexByName("A"),
To = graph.Vertices.FindVertexByName("B"),
Weight = 5
});
graph.Edges.Add(new Edge
{
From = graph.Vertices.FindVertexByName("B"),
To = graph.Vertices.FindVertexByName("C"),
Weight = 4
});
graph.Edges.Add(new Edge
{
From = graph.Vertices.FindVertexByName("C"),
To = graph.Vertices.FindVertexByName("D"),
Weight = 8
});
graph.Edges.Add(new Edge
{
From = graph.Vertices.FindVertexByName("D"),
To = graph.Vertices.FindVertexByName("C"),
Weight = 8
});
//graph is passed as param with source and dest vertices
public int Run(Graph graph, Vertex source, Vertex destvertex)
{
Vertex current = source;
List<Vertex> queue = new List<Vertex>();
foreach (var vertex in graph.Vertices)
{
vertex.Weight = int.MaxValue;
vertex.PreviousVertex = null;
vertex.State = VertexStates.UnVisited;
queue.Add(vertex);
}
current = graph.Vertices.FindVertexByName(current.Name);
current.Weight = 0;
queue.Add(current);
while (queue.Count > 0)
{
Vertex minDistance = queue.OrderBy(o => o.Weight).FirstOrDefault();
queue.Remove(minDistance);
if (current.Weight == int.MaxValue)
{
break;
}
minDistance.Neighbors = graph.GetVerTextNeigbours(current);
foreach (Vertex neighbour in minDistance.Neighbors)
{
Edge edge = graph.Edges.FindEdgeByStartingAndEndVertex(minDistance, neighbour);
int dist = minDistance.Weight + (edge.Weight);
if (dist < neighbour.Weight)
{
//from this point onwards i get stuck
neighbour.Weight = dist;
neighbour.PreviousVertex = minDistance;
queue.Remove(neighbour);
queueVisited.Enqueue(neighbor);
}
}
minDistance.State = VertexStates.Visited;
}
//here i want to record all node that was visited
while (queueVisited.Count > 0)
{
Vertex temp = queueVisited.Dequeue();
count += temp.Neighbors.Sum(x => x.Weight);
}
return count;
}
There are several immediate issues with the above code.
The current variable is never re-assigned, nor is the object mutated, so graph.GetVerTextNeigbours(current) always returns the same set.
As a result, this will never even find the target vertex if more than one edge must be traversed.
The neighbour node is removed from the queue via queue.Remove(neighbour). This is incorrect and can prevent a node/vertex from being [re]explored. Instead, it should just have its weight updated in the queue.
In this case, the "decrease-key v in Q" step of the pseudo-code becomes a no-op in this implementation, due to object mutation and the lack of a priority-queue (i.e. sorted) structure.
The visited queue, as per queueVisited.Enqueue(neighbor), and the later "sum of all neighbour weights of all visited nodes" are incorrect.
After the algorithm runs, the cost (Weight) of any node/vertex that was explored is the cost of the shortest path to that vertex. Thus, you can trivially ask the goal vertex what the cost is.
In addition, for the sake of the algorithm simplicity, I recommend giving up the "Graph/Vertex/Edge" layout initially created and, instead, pass in a simple Node. This Node can enumerate its neighbor Nodes, knows the weights to each neighbor, its own current weight, and the best-cost node to it.
Simply build this Node graph once, at the start, then pass the starting Node. This eliminates (or greatly simplifies) calls like GetVerTextNeighbors and FindEdgeByStartingAndEndVertex, and strange/unnecessary side-effects like assigning Neighbors each time the algorithm visits the vertex.
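A minimal sketch of such a self-contained node (the names here are mine, not the asker's types):

```csharp
using System.Collections.Generic;

public class PathNode
{
    public string Name;

    // Each node knows its neighbours and the edge weight to each one, replacing
    // the GetVerTextNeigbours and FindEdgeByStartingAndEndVertex lookups.
    public Dictionary<PathNode, int> Neighbors = new Dictionary<PathNode, int>();

    // Filled in by the algorithm: cost of the best known path from the start,
    // and the predecessor on that path.
    public int Cost = int.MaxValue;
    public PathNode Previous;

    public void ConnectTo(PathNode other, int weight)
    {
        Neighbors[other] = weight; // directed; call both ways for an undirected edge
    }
}
```

After the algorithm runs, destvertex.Cost is the shortest distance, and following the Previous links back to the source reconstructs the path.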

Replacing two dimensional array with other data structure

I have a complex problem, I don't know whether I can describe it properly or not.
I have a two-dimensional array of objects of a class. Currently my algorithm operates only on this two-dimensional array, but only some of the locations of that array are occupied (almost 40%).
It works fine for a small data set, but with a large data set (a large number of elements in that 2d array, e.g. 10000) the program becomes memory-exhaustive, because I have nested loops that make 10000 * 10000 = 100000000 iterations.
Can I replace the 2d array with a Hashtable or some other data structure? My main aim is to reduce the number of iterations only by changing the data structure.
Pardon me for not explaining properly.
I am developing using C#
Sounds like the data structure you have is a sparse matrix and I'm going to point you to Are there any storage optimized Sparse Matrix implementations in C#?
You can create a key for a dictionary from the array coordinates. Something like:
int key = x * 46000 + y;
(This naturally works for coordinates resembling an array up to 46000x46000, which is about what you can fit in an int. If you need to represent a larger array, you would use a long value as key.)
With the key you can store and retrieve the object in a Dictionary<int, YourClass>. Storing and retrieving values from the dictionary is quite fast, not much slower than using an array.
You can iterate the items in the dictionary, but you won't get them in a predictable order, i.e. not the same as looping the x and y coordinates of an array.
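A minimal sketch of that idea (the class and the width constant are placeholders, not an existing API):

```csharp
using System.Collections.Generic;

public class SparseGrid<T> where T : class
{
    private const int Width = 46000; // must exceed the largest y coordinate used
    private readonly Dictionary<int, T> cells = new Dictionary<int, T>();

    private static int Key(int x, int y)
    {
        return x * Width + y; // unique as long as 0 <= y < Width
    }

    public void Set(int x, int y, T value)
    {
        cells[Key(x, y)] = value;
    }

    public T Get(int x, int y)
    {
        T value;
        // Unoccupied cells simply are not stored, so this returns null for them.
        return cells.TryGetValue(Key(x, y), out value) ? value : null;
    }

    // Enumerates only the occupied cells, not the whole virtual grid.
    public IEnumerable<T> Occupied()
    {
        return cells.Values;
    }
}
```

The Occupied() enumeration is what actually cuts the iteration count: the loops visit only stored items, although in no particular x/y order.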
If you need high performance, you can roll your own data structure.
If the objects can be contained in only one container and are not moved between containers, you can build a custom hashset-like data structure.
You add X, Y and Next fields to your class.
You make a singly linked list of your objects, stored in an array that acts as the hash table.
This can be very, very fast.
I wrote it from scratch, so there may be bugs.
Clear and rehash are not implemented; this is a demonstration only.
The complexity of every operation is on average O(1).
To make it easy to enumerate all nodes while skipping empty ones, there is a doubly linked list. Insertion into and removal from a doubly linked list are O(1), and you can enumerate all nodes while skipping unused ones, so enumerating all nodes is O(n), where n is the number of stored nodes, not the "virtual" size of this sparse matrix.
With the doubly linked list, items are enumerated in insertion order; the order is unrelated to the X and Y coordinates.
using System.Collections;
using System.Collections.Generic;

public class Node
{
internal NodeTable pContainer;
internal Node pTableNext;
internal int pX;
internal int pY;
internal Node pLinkedListPrev;
internal Node pLinkedListNext;
}
public class NodeTable :
IEnumerable<Node>
{
private Node[] pTable;
private Node pLinkedListFirst;
private Node pLinkedListLast;
// Capacity must be a prime number great enough as much items you want to store.
// You can make this dynamic too but need some more work (rehashing and prime number computation).
public NodeTable(int capacity)
{
this.pTable = new Node[capacity];
}
public int GetHashCode(int x, int y)
{
return (x + y * 104729); // Must be a prime number
}
public Node Get(int x, int y)
{
int bucket = (GetHashCode(x, y) & 0x7FFFFFFF) % this.pTable.Length;
for (Node current = this.pTable[bucket]; current != null; current = current.pTableNext)
{
if (current.pX == x && current.pY == y)
return current;
}
return null;
}
public IEnumerator<Node> GetEnumerator()
{
// Replace yield with a custom struct Enumerator to optimize performances.
for (Node node = this.pLinkedListFirst, next; node != null; node = next)
{
next = node.pLinkedListNext;
yield return node;
}
}
IEnumerator IEnumerable.GetEnumerator()
{
return this.GetEnumerator();
}
public bool Set(int x, int y, Node node)
{
// node == null means "remove whatever is at (x, y)";
// a non-null node must not already belong to a table.
if (node == null || node.pContainer == null)
{
int bucket = (GetHashCode(x, y) & 0x7FFFFFFF) % this.pTable.Length;
for (Node current = this.pTable[bucket], prev = null; current != null; current = current.pTableNext)
{
if (current.pX == x && current.pY == y)
{
this.fRemoveFromLinkedList(current);
if (node == null)
{
// Remove from table linked list
if (prev != null)
prev.pTableNext = current.pTableNext;
else
this.pTable[bucket] = current.pTableNext;
current.pTableNext = null;
}
else
{
// Replace old node from table linked list
node.pTableNext = current.pTableNext;
current.pTableNext = null;
if (prev != null)
prev.pTableNext = node;
else
this.pTable[bucket] = node;
node.pContainer = this;
node.pX = x;
node.pY = y;
this.fAddToLinkedList(node);
}
return true;
}
prev = current;
}
if (node == null)
return false; // nothing stored at (x, y) to remove
// New node.
node.pContainer = this;
node.pX = x;
node.pY = y;
// Add to table linked list
node.pTableNext = this.pTable[bucket];
this.pTable[bucket] = node;
// Add to global linked list
this.fAddToLinkedList(node);
return true;
}
return false;
}
private void fRemoveFromLinkedList(Node node)
{
Node prev = node.pLinkedListPrev;
Node next = node.pLinkedListNext;
if (prev != null)
prev.pLinkedListNext = next;
else
this.pLinkedListFirst = next;
if (next != null)
next.pLinkedListPrev = prev;
else
this.pLinkedListLast = prev;
node.pLinkedListPrev = null;
node.pLinkedListNext = null;
}
private void fAddToLinkedList(Node node)
{
node.pLinkedListPrev = this.pLinkedListLast;
this.pLinkedListLast = node;
if (this.pLinkedListFirst == null)
this.pLinkedListFirst = node;
}
}
arrays give multiple features:
A way of organizing data as a list of elements
A way to access the data elements by index number (1st, 2nd, 3rd etc)
But a common downside (depending on the language and runtime) is that arrays often work poorly as a sparse data structure: if you don't need all of the array elements, you end up with wasted memory space.
So, yes, a hashtable will usually save space over an array.
But you asked: "My main aim is to reduce the number of iterations only by changing the data structure." In order to answer that question, we need to know more about your algorithm, i.e. what you're doing in each loop of your program.
For example, there are many ways to sort an array or a matrix, and the different sorting algorithms use differing numbers of iterations.

Remove all the objects from a list that have the same value as any object from another list (XNA)

I have two Lists of Vector2, Positions and Floor, and I'm trying to do this:
if a Position is the same as a Floor entry, then delete that position from the list.
Here is what I thought would work, but it doesn't:
public void GenerateFloor()
{
// I didn't paste everything; the code adds vectors to the Floor list, etc.
block.Floor.Add(new Vector2(block.Texture.Width, block.Texture.Height) + RoomLocation);
// And here is the way I thought to delete the positions:
block.Positions.RemoveAll(FloorSafe);
}
private bool FloorSafe(Vector2 x)
{
foreach (Vector2 j in block.Floor)
{
return x == j;
}
//or
for (int n = 0; n < block.Floor.Count; n++)
{
return x == block.Floor[n];
}
}
I know this is not the right way, so how can I write it? I need to delete all the Position Vector2 values that are the same as any of the Floor Vector2 values.
===============================================================================
EDIT:
It works! For people searching for how to do it, here is my final code based on Hexxagonal's answer:
public void FloorSafe()
{
// All the positions that do NOT occur in the Floor list.
IEnumerable<Vector2> ReversedResult = block.Positions.Except(block.Floor);
// All the positions that DO occur in the Floor list
// (everything in Positions that is not in the result above).
IEnumerable<Vector2> Result = block.Positions.Except(ReversedResult);
foreach (Vector2 Positions in Result.ToList())
{
block.Positions.Remove(Positions); // remove all those vectors from the list
}
}
You could do a LINQ Except. This will give you everything in the Positions collection that is not in the Floor collection:
var result = block.Positions.Except(block.Floor);
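A more direct way to do the deletion in place, assuming the same block.Positions and block.Floor lists, is RemoveAll with a HashSet for fast lookups (note that the Except approach also silently drops duplicate positions, since it has set semantics):

```csharp
// Build a set of floor positions once, then strip every matching position in one pass.
var floorSet = new HashSet<Vector2>(block.Floor);
block.Positions.RemoveAll(p => floorSet.Contains(p));
```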

C# XNA: Optimizing Collision Detection?

I'm working on a simple demo for collision detection, which contains only a bunch of objects bouncing around in the window. (The goal is to see how many objects the game can handle at once without dropping frames.)
There is gravity, so the objects are either moving or else colliding with a wall.
The naive solution was O(n^2):
foreach Collidable c1:
foreach Collidable c2:
checkCollision(c1, c2);
This is pretty bad. So I set up CollisionCell objects, which maintain information about a portion of the screen. The idea is that each Collidable only needs to check for the other objects in its cell. With 60 px by 60 px cells, this yields almost a 10x improvement, but I'd like to push it further.
A profiler has revealed that the code spends 50% of its time in the function each cell uses to get its contents. Here it is:
// all the objects in this cell
public ICollection<GameObject> Containing
{
get
{
ICollection<GameObject> containing = new HashSet<GameObject>();
foreach (GameObject obj in engine.GameObjects) {
// 20% of processor time spent in this conditional
if (obj.Position.X >= bounds.X &&
obj.Position.X < bounds.X + bounds.Width &&
obj.Position.Y >= bounds.Y &&
obj.Position.Y < bounds.Y + bounds.Height) {
containing.Add(obj);
}
}
return containing;
}
}
20% of the program's total time is spent in that one conditional.
Here is where the above function gets called:
// Get a list of lists of cell contents
List<List<GameObject>> cellContentsSet = cellManager.getCellContents();
// foreach item, only check items in the same cell
foreach (List<GameObject> cellMembers in cellContentsSet) {
foreach (GameObject item in cellMembers) {
// process collisions
}
}
//...
// Gets a list of list of cell contents (each sub list = 1 cell)
internal List<List<GameObject>> getCellContents() {
List<List<GameObject>> result = new List<List<GameObject>>();
foreach (CollisionCell cell in cellSet) {
result.Add(new List<GameObject>(cell.Containing.ToArray()));
}
return result;
}
Right now, I have to iterate over every cell, even empty ones. Perhaps this could be improved somehow, but I'm not sure how to verify that a cell is empty without looking at it. (Maybe I could implement something like the sleeping objects in some physics engines: if an object will be still for a while, it goes to sleep and is not included in the calculations for every frame.)
What can I do to optimize this? (Also, I'm new to C# - are there any other glaring stylistic errors?)
When the game starts lagging out, the objects tend to be packed fairly tightly, so there's not much motion going on. Perhaps I can take advantage of this somehow, by writing a function to see if, given an object's current velocity, it can possibly leave its current cell before the next call to Update().
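That check could be sketched like this (hypothetical names; assumes XNA's Vector2 and Rectangle, and that velocity is in pixels per frame):

```csharp
// True if the object could cross its cell's boundary within one Update() step.
// If false, its cell membership cannot change this frame and need not be recomputed.
static bool MayLeaveCell(Vector2 position, Vector2 velocity, Rectangle cellBounds)
{
    float nextX = position.X + velocity.X;
    float nextY = position.Y + velocity.Y;
    return nextX < cellBounds.X || nextX >= cellBounds.X + cellBounds.Width
        || nextY < cellBounds.Y || nextY >= cellBounds.Y + cellBounds.Height;
}
```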
UPDATE 1 I decided to maintain a list of the objects that were found to be in the cell at the last update, and check those first to see if they were still in the cell. Also, I maintained an area field in CollisionCell, so that when the cell was filled I could stop looking. Here is my implementation of that; it made the whole demo much slower:
// all the objects in this cell
private ICollection<GameObject> prevContaining;
private ICollection<GameObject> containing;
internal ICollection<GameObject> Containing {
get {
return containing;
}
}
/**
* To ensure that `containing` and `prevContaining` are up to date, this MUST be called once per Update() loop in which it is used.
* What is a good way to enforce this?
*/
public void updateContaining()
{
ICollection<GameObject> result = new HashSet<GameObject>();
uint area = checked((uint) bounds.Width * (uint) bounds.Height); // the area of this cell
// first, try to fill up this cell with objects that were in it previously
ICollection<GameObject>[] toSearch = new ICollection<GameObject>[] { prevContaining, engine.GameObjects };
foreach (ICollection<GameObject> potentiallyContained in toSearch) {
if (area > 0) { // redundant, but faster?
foreach (GameObject obj in potentiallyContained) {
if (obj.Position.X >= bounds.X &&
obj.Position.X < bounds.X + bounds.Width &&
obj.Position.Y >= bounds.Y &&
obj.Position.Y < bounds.Y + bounds.Height) {
result.Add(obj);
area -= checked((uint) Math.Pow(obj.Radius, 2)); // assuming objects are square
if (area <= 0) {
break;
}
}
}
}
}
prevContaining = containing;
containing = result;
}
UPDATE 2 I abandoned that last approach. Now I'm trying to maintain a pool of collidables (orphans), and remove objects from them when I find a cell that contains them:
internal List<List<GameObject>> getCellContents() {
List<GameObject> orphans = new List<GameObject>(engine.GameObjects);
List<List<GameObject>> result = new List<List<GameObject>>();
foreach (CollisionCell cell in cellSet) {
cell.updateContaining(ref orphans); // this call will alter orphans!
result.Add(new List<GameObject>(cell.Containing));
if (orphans.Count == 0) {
break;
}
}
return result;
}
// `orphans` is a list of GameObjects that do not yet have a cell
public void updateContaining(ref List<GameObject> orphans) {
ICollection<GameObject> result = new HashSet<GameObject>();
for (int i = 0; i < orphans.Count; i++) {
// 20% of processor time spent in this conditional
if (orphans[i].Position.X >= bounds.X &&
orphans[i].Position.X < bounds.X + bounds.Width &&
orphans[i].Position.Y >= bounds.Y &&
orphans[i].Position.Y < bounds.Y + bounds.Height) {
result.Add(orphans[i]);
orphans.RemoveAt(i);
i--; // compensate for the removal so the next object is not skipped
}
}
containing = result;
}
This only yields a marginal improvement, not the 2x or 3x I'm looking for.
UPDATE 3 Again I abandoned the above approaches, and decided to let each object maintain its current cell:
private CollisionCell currCell;
internal CollisionCell CurrCell {
get {
return currCell;
}
set {
currCell = value;
}
}
This value gets updated:
// Run 1 cycle of this object
public virtual void Run()
{
position += velocity;
parent.CellManager.updateContainingCell(this);
}
CellManager code:
private IDictionary<Vector2, CollisionCell> cellCoords = new Dictionary<Vector2, CollisionCell>();
internal void updateContainingCell(GameObject gameObject) {
CollisionCell currCell = findContainingCell(gameObject);
gameObject.CurrCell = currCell;
if (currCell != null) {
currCell.Containing.Add(gameObject);
}
}
// null if no such cell exists
private CollisionCell findContainingCell(GameObject gameObject) {
if (gameObject.Position.X > GameEngine.GameWidth
|| gameObject.Position.X < 0
|| gameObject.Position.Y > GameEngine.GameHeight
|| gameObject.Position.Y < 0) {
return null;
}
// we'll need to be able to access these outside of the loops
uint minWidth = 0;
uint minHeight = 0;
for (minWidth = 0; minWidth + cellWidth < gameObject.Position.X; minWidth += cellWidth) ;
for (minHeight = 0; minHeight + cellHeight < gameObject.Position.Y; minHeight += cellHeight) ;
CollisionCell currCell = cellCoords[new Vector2(minWidth, minHeight)];
// Make sure `currCell` actually contains gameObject
Debug.Assert(gameObject.Position.X >= currCell.Bounds.X && gameObject.Position.X <= currCell.Bounds.Width + currCell.Bounds.X,
String.Format("{0} should be between lower bound {1} and upper bound {2}", gameObject.Position.X, currCell.Bounds.X, currCell.Bounds.X + currCell.Bounds.Width));
Debug.Assert(gameObject.Position.Y >= currCell.Bounds.Y && gameObject.Position.Y <= currCell.Bounds.Height + currCell.Bounds.Y,
String.Format("{0} should be between lower bound {1} and upper bound {2}", gameObject.Position.Y, currCell.Bounds.Y, currCell.Bounds.Y + currCell.Bounds.Height));
return currCell;
}
I thought this would make it better - now I only have to iterate over collidables, not all collidables * cells. Instead, the game is now hideously slow, delivering only 1/10th of its performance with my above approaches.
The profiler indicates that a different method is now the main hot spot, and the time to get neighbors for an object is trivially short. That method didn't change from before, so perhaps I'm calling it WAY more than I used to...
It spends 50% of its time in that function because you call that function a lot. Optimizing that one function will only yield incremental improvements to performance.
Alternatively, just call the function less!
You've already started down that path by setting up a spatial partitioning scheme (lookup Quadtrees to see a more advanced form of your technique).
A second approach is to break your N*N loop into an incremental form and to use a CPU budget.
You can allocate a CPU budget for each of the modules that want action during frame times (during Updates). Collision is one of these modules, AI might be another.
Let's say you want to run your game at 60 fps. This means you have about 1/60 s = 0.0167 s of CPU time to burn between frames. Now we can split those 0.0167 s between our modules. Let's give collision 30% of the budget: 0.005 s.
Now your collision algorithm knows that it can only spend 0.005 s working. So if it runs out of time, it will need to postpone some tasks for later - you will make the algorithm incremental. Code for achieving this can be as simple as:
const double CollisionBudget = 0.005;
Collision[] _allPossibleCollisions;
int _lastCheckedCollision;
void HandleCollisions() {
var startTime = HighPerformanceCounter.Now;
if (_allPossibleCollisions == null ||
_lastCheckedCollision >= _allPossibleCollisions.Length) {
// Start a new series
_allPossibleCollisions = GenerateAllPossibleCollisions();
_lastCheckedCollision = 0;
}
for (var i=_lastCheckedCollision; i<_allPossibleCollisions.Length; i++) {
// Don't go over the budget
if (HighPerformanceCounter.Now - startTime > CollisionBudget) {
break;
}
_lastCheckedCollision = i;
if (CheckCollision(_allPossibleCollisions[i])) {
HandleCollision(_allPossibleCollisions[i]);
}
}
}
There, now it doesn't matter how fast the collision code is, it will be done as quickly as is possible without affecting the user's perceived performance.
Benefits include:
The algorithm is designed to run out of time, it just resumes on the next frame, so you don't have to worry about this particular edge case.
CPU budgeting becomes more and more important as the number of advanced/time consuming algorithms increases. Think AI. So it's a good idea to implement such a system early on.
Human response time is slower than 30 Hz, while your frame loop runs at 60 Hz, so the algorithm gets multiple frames to complete its work; it's OK that it doesn't finish in a single frame.
Doing it this way gives stable, data-independent frame rates.
It still benefits from performance optimizations to the collision algorithm itself.
Collision algorithms are designed to track down the "sub-frame" in which a collision happened. That is, you will never be so lucky as to catch a collision just as it happens; thinking you are doing so is lying to yourself.
I can help here; I wrote my own collision detection as an experiment. I think I can tell you right now that you won't get the performance you need without changing algorithms. Sure, the naive way is nice, but it only works for so many items before collapsing. What you need is sweep and prune. The basic idea goes like this (from my collision detection library project):
using System.Collections.Generic;
using AtomPhysics.Interfaces;
namespace AtomPhysics.Collisions
{
public class SweepAndPruneBroadPhase : IBroadPhaseCollider
{
private INarrowPhaseCollider _narrowPhase;
private AtomPhysicsSim _sim;
private List<Extent> _xAxisExtents = new List<Extent>();
private List<Extent> _yAxisExtents = new List<Extent>();
private Extent e1;
public SweepAndPruneBroadPhase(INarrowPhaseCollider narrowPhase)
{
_narrowPhase = narrowPhase;
}
public AtomPhysicsSim Sim
{
get { return _sim; }
set { _sim = value; }
}
public INarrowPhaseCollider NarrowPhase
{
get { return _narrowPhase; }
set { _narrowPhase = value; }
}
public bool NeedsNotification { get { return true; } }
public void Add(Nucleus nucleus)
{
Extent xStartExtent = new Extent(nucleus, ExtentType.Start);
Extent xEndExtent = new Extent(nucleus, ExtentType.End);
_xAxisExtents.Add(xStartExtent);
_xAxisExtents.Add(xEndExtent);
Extent yStartExtent = new Extent(nucleus, ExtentType.Start);
Extent yEndExtent = new Extent(nucleus, ExtentType.End);
_yAxisExtents.Add(yStartExtent);
_yAxisExtents.Add(yEndExtent);
}
public void Remove(Nucleus nucleus)
{
// Removing items inside a foreach would invalidate the enumerator,
// so use RemoveAll to strip every extent belonging to this nucleus.
_xAxisExtents.RemoveAll(e => e.Nucleus == nucleus);
_yAxisExtents.RemoveAll(e => e.Nucleus == nucleus);
}
public void Update()
{
_xAxisExtents.InsertionSort(comparisonMethodX);
_yAxisExtents.InsertionSort(comparisonMethodY);
for (int i = 0; i < _xAxisExtents.Count; i++)
{
e1 = _xAxisExtents[i];
if (e1.Type == ExtentType.Start)
{
HashSet<Extent> potentialCollisionsX = new HashSet<Extent>();
for (int j = i + 1; j < _xAxisExtents.Count && _xAxisExtents[j].Nucleus.ID != e1.Nucleus.ID; j++)
{
potentialCollisionsX.Add(_xAxisExtents[j]);
}
HashSet<Extent> potentialCollisionsY = new HashSet<Extent>();
for (int j = i + 1; j < _yAxisExtents.Count && _yAxisExtents[j].Nucleus.ID != e1.Nucleus.ID; j++)
{
potentialCollisionsY.Add(_yAxisExtents[j]);
}
List<Extent> probableCollisions = new List<Extent>();
foreach (Extent e in potentialCollisionsX)
{
if (potentialCollisionsY.Contains(e) && !probableCollisions.Contains(e) && e.Nucleus.ID != e1.Nucleus.ID)
{
probableCollisions.Add(e);
}
}
foreach (Extent e2 in probableCollisions)
{
if (e1.Nucleus.DNCList.Contains(e2.Nucleus) || e2.Nucleus.DNCList.Contains(e1.Nucleus))
continue;
NarrowPhase.DoCollision(e1.Nucleus, e2.Nucleus);
}
}
}
}
private bool comparisonMethodX(Extent e1, Extent e2)
{
float e1PositionX = e1.Nucleus.NonLinearSpace != null ? e1.Nucleus.NonLinearPosition.X : e1.Nucleus.Position.X;
float e2PositionX = e2.Nucleus.NonLinearSpace != null ? e2.Nucleus.NonLinearPosition.X : e2.Nucleus.Position.X;
e1PositionX += (e1.Type == ExtentType.Start) ? -e1.Nucleus.Radius : e1.Nucleus.Radius;
e2PositionX += (e2.Type == ExtentType.Start) ? -e2.Nucleus.Radius : e2.Nucleus.Radius;
return e1PositionX < e2PositionX;
}
private bool comparisonMethodY(Extent e1, Extent e2)
{
float e1PositionY = e1.Nucleus.NonLinearSpace != null ? e1.Nucleus.NonLinearPosition.Y : e1.Nucleus.Position.Y;
float e2PositionY = e2.Nucleus.NonLinearSpace != null ? e2.Nucleus.NonLinearPosition.Y : e2.Nucleus.Position.Y;
e1PositionY += (e1.Type == ExtentType.Start) ? -e1.Nucleus.Radius : e1.Nucleus.Radius;
e2PositionY += (e2.Type == ExtentType.Start) ? -e2.Nucleus.Radius : e2.Nucleus.Radius;
return e1PositionY < e2PositionY;
}
private enum ExtentType { Start, End }
private sealed class Extent
{
private ExtentType _type;
public ExtentType Type
{
get
{
return _type;
}
set
{
_type = value;
_hashcode = 23;
_hashcode *= 17 + Nucleus.GetHashCode();
}
}
private Nucleus _nucleus;
public Nucleus Nucleus
{
get
{
return _nucleus;
}
set
{
_nucleus = value;
_hashcode = 23;
_hashcode *= 17 + Nucleus.GetHashCode();
}
}
private int _hashcode;
public Extent(Nucleus nucleus, ExtentType type)
{
Nucleus = nucleus;
Type = type;
_hashcode = 23;
_hashcode *= 17 + Nucleus.GetHashCode();
}
public override bool Equals(object obj)
{
return Equals(obj as Extent);
}
public bool Equals(Extent extent)
{
return extent != null && this.Nucleus == extent.Nucleus;
}
public override int GetHashCode()
{
return _hashcode;
}
}
}
}
and here's the code that does the insertion sort (more-or-less a direct translation of the pseudocode here):
/// <summary>
/// Performs an insertion sort on the list.
/// </summary>
/// <typeparam name="T">The type of the list supplied.</typeparam>
/// <param name="list">the list to sort.</param>
/// <param name="comparison">the method for comparison of two elements.</param>
/// <returns></returns>
public static void InsertionSort<T>(this IList<T> list, Func<T, T, bool> comparison)
{
// The pseudocode is 1-based; in C# the outer loop must start at index 1
// and the inner loop must run down to j > 0, otherwise the element at
// index 0 is never compared. Swapping in place also avoids the double
// element shift caused by RemoveAt followed by Insert.
for (int i = 1; i < list.Count; i++)
{
for (int j = i; j > 0 && comparison(list[j], list[j - 1]); j--)
{
T tempItem = list[j];
list[j] = list[j - 1];
list[j - 1] = tempItem;
}
}
}
IIRC, I was able to get an extremely large performance increase with that, especially when dealing with large numbers of colliding bodies. You'll need to adapt it for your code, but that's the basic premise behind sweep and prune.
The other thing I want to remind you of is that you should use a profiler, like the one made by Red Gate. There's a free trial which should last you long enough.
It looks like you are looping through all the game objects just to see what objects are contained in a cell. It seems like a better approach would be to store the list of game objects that are in a cell for each cell. If you do that and each object knows what cells it is in, then moving objects between cells should be easy. This seems like it will yield the biggest performance gain.
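One hedged sketch of that idea, with a dictionary mapping cell coordinates to the objects they contain. The SpatialGrid type and cell-size parameter are placeholders I'm introducing for illustration, not the asker's actual types:

```csharp
using System.Collections.Generic;

// Sketch: each grid cell keeps its own list of occupants, so a cell
// query becomes a dictionary lookup instead of a scan of all objects.
public class SpatialGrid<T>
{
    private readonly float _cellSize;
    private readonly Dictionary<(int, int), List<T>> _cells =
        new Dictionary<(int, int), List<T>>();

    public SpatialGrid(float cellSize) { _cellSize = cellSize; }

    private (int, int) CellOf(float x, float y) =>
        ((int)System.Math.Floor(x / _cellSize), (int)System.Math.Floor(y / _cellSize));

    public void Add(T obj, float x, float y)
    {
        var key = CellOf(x, y);
        if (!_cells.TryGetValue(key, out var list))
            _cells[key] = list = new List<T>();
        list.Add(obj);
    }

    // Moving between cells is just a remove from the old list and an add
    // to the new one; same-cell moves cost nothing.
    public void Move(T obj, float oldX, float oldY, float newX, float newY)
    {
        var oldKey = CellOf(oldX, oldY);
        var newKey = CellOf(newX, newY);
        if (oldKey.Equals(newKey)) return;
        _cells[oldKey].Remove(obj);
        Add(obj, newX, newY);
    }

    public IReadOnlyList<T> ObjectsIn(float x, float y) =>
        _cells.TryGetValue(CellOf(x, y), out var list)
            ? list
            : (IReadOnlyList<T>)System.Array.Empty<T>();
}
```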
Here is another optimization tip for determining what cells an object is in:
If you have already determined what cell(s) an object is in and know that based on the objects velocity it will not change cells for the current frame, there is no need to rerun the logic that determines what cells the object is in. You can do a quick check by creating a bounding box that contains all the cells that the object is in. You can then create a bounding box that is the size of the object + the velocity of the object for the current frame. If the cell bounding box contains the object + velocity bounding box, no further checks need to be done. If the object isn't moving, it's even easier and you can just use the object bounding box.
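A rough sketch of that containment test follows. The Rect struct and method names stand in for whatever bounding-box type the engine actually provides:

```csharp
// Sketch: skip cell reassignment when the object's swept bounds
// stay inside the bounds of the cells it already occupies.
public struct Rect
{
    public float X, Y, Width, Height;
    public Rect(float x, float y, float w, float h) { X = x; Y = y; Width = w; Height = h; }

    public bool Contains(Rect other) =>
        other.X >= X && other.Y >= Y &&
        other.X + other.Width <= X + Width &&
        other.Y + other.Height <= Y + Height;
}

public static class CellCheck
{
    // Grow the object's box by its per-frame velocity (in either direction),
    // then test whether the cell bounding box still contains it.
    public static bool StaysInCurrentCells(Rect cellBounds, Rect objectBounds,
                                           float vx, float vy)
    {
        var swept = new Rect(
            objectBounds.X + System.Math.Min(vx, 0),
            objectBounds.Y + System.Math.Min(vy, 0),
            objectBounds.Width + System.Math.Abs(vx),
            objectBounds.Height + System.Math.Abs(vy));
        return cellBounds.Contains(swept);
    }
}
```

If StaysInCurrentCells returns true, the cell-assignment logic can be skipped for that frame, which is the shortcut the tip describes.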
Let me know if that makes sense, or google / bing search for "Quad Tree", or if you don't mind using open source code, check out this awesome physics library: http://www.codeplex.com/FarseerPhysics
I'm in the exact same boat as you. I'm trying to create an overhead shooter and need to push efficiency to the max so I can have tons of bullets and enemies on screen at once.
I'd put all of my collidable objects in an array with a numbered index. This lets you take advantage of an observation: if you iterate over the full list for each item, you'll duplicate effort. That is (note: I'm making up variable names just to make the pseudo-code easier to read):
if (objs[49].Intersects(objs[51]))
is equivalent to:
if (objs[51].Intersects(objs[49]))
So if you use a numbered index you can save some time by not duplicating efforts. Do this instead:
for (int i1 = 0; i1 < collidables.Count; i1++)
{
//By setting i2 = i1 + 1 you ensure an obj isn't checking collision with itself, and that objects already checked against i1 aren't checked again. For instance, collidables[4] doesn't need to check against collidables[0] again since this was checked earlier.
for (int i2 = i1 + 1; i2 < collidables.Count; i2++)
{
//Check collisions here
}
}
Also, I'd have each cell either have a count or a flag to determine if you even need to check for collisions. If a certain flag is set, or if the count is less than 2, than no need to check for collisions.
Just a heads up: Some people suggest farseer; which is a great 2D physics library for use with XNA. If you're in the market for a 3D physics engine for XNA, I've used bulletx (a c# port of bullet) in XNA projects to great effect.
Note: I have no affiliation to the bullet or bulletx projects.
An idea might be to use a bounding circle. Basically, when a Collidable is created, keep track of its center point and calculate a radius/diameter that contains the whole object. You can then do a first-pass elimination using something like:
int r = C1.BoundingRadius + C2.BoundingRadius;
if (Math.Abs(C1.X - C2.X) > r || Math.Abs(C1.Y - C2.Y) > r)
{
// The circles can't possibly overlap; skip further checks...
}
This drops the comparisons to two for most objects, but how much this will gain you I'm not sure...profile!
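Fleshed out into a self-contained check, the pre-pass can fall through to an exact circle-overlap test that compares squared distances, avoiding a square root. The type and method names here are my own, not the answerer's:

```csharp
public static class CirclePrecheck
{
    // Cheap rejection first: if either axis separation exceeds the summed
    // radii, the circles cannot overlap, so the expensive narrow-phase
    // test is skipped. Otherwise do the exact squared-distance comparison.
    public static bool MightCollide(float x1, float y1, float r1,
                                    float x2, float y2, float r2)
    {
        float r = r1 + r2;
        if (System.Math.Abs(x1 - x2) > r || System.Math.Abs(y1 - y2) > r)
            return false; // axis-aligned early out
        float dx = x1 - x2, dy = y1 - y2;
        return dx * dx + dy * dy <= r * r; // exact circle overlap
    }
}
```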
There are a couple of things that could be done to speed up the process... but as far as I can see your method of checking for simple rectangular collision is just fine.
But I'd replace the check
if (obj.Position.X ....)
With
if (obj.Bounds.Intersects(this.Bounds))
And I'd also replace the line
result.Add(new List<GameObject>(cell.Containing.ToArray()));
For
result.Add(new List<GameObject>(cell.Containing));
The Containing property returns an ICollection<T>, which implements the IEnumerable<T> that the List<T> constructor accepts.
ToArray() simply iterates over the collection to build an array, and that iteration is repeated when the new list is created, so passing the collection directly avoids a needless copy.
I know this thread is old, but I would say the marked answer is completely wrong: its code contains a fatal error and will cost performance rather than improve it.
First, a small note: the code is written to be called from your Draw method, but that is the wrong place for collision detection. In Draw you should only draw, nothing else!
You also can't simply call HandleCollisions() in Update, because Update gets called a lot more often than Draw. If you want to call HandleCollisions() there, your code has to look like this. This prevents the collision detection from running more than once per frame:
private bool check = false;
protected override void Update(GameTime gameTime)
{
if(!check)
{
check = true;
HandleCollisions();
}
}
protected override void Draw(GameTime gameTime)
{
check = false;
}
Now let's take a look at what's wrong with HandleCollisions().
Example: we have 500 objects and check every possible collision without optimizing our detection.
With 500 objects we should get 249,500 collision checks (499 × 500, because we don't want to check whether an object collides with itself).
But with Frank's code below, almost all of those checks are silently skipped (only 500 collision checks are ever done). That is the only reason it appears to "increase performance"!
Why? Because _lastCheckedCollision can never equal or exceed _allPossibleCollisions.Length, so after the first frame you would only ever check the last index, 499:
for (var i = _lastCheckedCollision; i < _allPossibleCollisions.Length; i++)
_lastCheckedCollision = i;
// << i always stays below _allPossibleCollisions.Length inside the loop,
// so _lastCheckedCollision tops out at Length - 1 and never triggers the reset
So you have to replace this:
if (_allPossibleCollisions == null ||
_lastCheckedCollision >= _allPossibleCollisions.Length)
with this:
if (_allPossibleCollisions == null ||
_lastCheckedCollision >= _allPossibleCollisions.Length - 1) {
Alternatively, your whole code can be replaced by this:
private bool check = false;
protected override void Update(GameTime gameTime)
{
if(!check)
{
check = true;
_allPossibleCollisions = GenerateAllPossibleCollisions();
for(int i=0; i < _allPossibleCollisions.Length; i++)
{
if (CheckCollision(_allPossibleCollisions[i]))
{
//Collision!
}
}
}
}
protected override void Draw(GameTime gameTime)
{
check = false;
}
This should be a lot faster than the original code, and it works.
RCIX's answer should be marked as correct, because Frank's answer is wrong.
