I'm still relatively new to Photon since the deprecation of UNet, and I'm having trouble getting and setting local custom properties. I'm trying to have two different teams (players and angels) be chosen. Each player starts as a spectator. A certain percentage of players are chosen to be angels, and the rest are assigned as players. I can manage to get and set the property of a randomly chosen player, but I can't seem to assign the values for the remaining players. The snippet of code is below.
private IEnumerator TeamBalance()
{
    angelCount = Mathf.Floor(PhotonNetwork.PlayerList.Length * angelPercent);
    currentAngels = angelCount;
    for (int i = 0; i < angelCount;)
    {
        int index = Random.Range(0, PhotonNetwork.PlayerList.Length);
        if (PhotonNetwork.PlayerList[index].CustomProperties["team"].ToString() == "spectator")
        {
            PhotonNetwork.PlayerList[index].CustomProperties["team"] = "angel";
            i++;
        }
    }
    foreach (var player in PhotonNetwork.PlayerList)
    {
        if (player.CustomProperties["team"].ToString() == "spectator")
        {
            player.CustomProperties["team"] = "player";
        }
    }
    yield return null;
}
The end result for 3 players ends up picking 1 angel, but with 2 spectators still remaining.
You need to use the Player.SetCustomProperties method to set properties instead of assigning them directly. That way PUN can track what's been changed and synchronize the update properly.
https://doc-api.photonengine.com/en/pun/v2/class_photon_1_1_realtime_1_1_player.html#a0c1010eda4f775ff56f8c86b026be41e
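For example, a minimal sketch of the angel assignment using SetCustomProperties (note that the Hashtable here is ExitGames.Client.Photon.Hashtable, PUN's own type, not System.Collections.Hashtable):
var player = PhotonNetwork.PlayerList[index];
if ((string)player.CustomProperties["team"] == "spectator")
{
    // Request the property change through PUN instead of writing to the local copy
    var props = new ExitGames.Client.Photon.Hashtable { { "team", "angel" } };
    player.SetCustomProperties(props);
    i++;
}
The same applies to the second loop that turns the remaining spectators into players.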
I'm working on a brickout game clone, and I'm trying to add a "multiply the ball" feature to the game. This video shows the feature.
Here is my code:
//access to all active balls
Ball[] balls = FindObjectsOfType<Ball>();

//loop for the length of the balls
for (int i = 0; i <= balls.Length; i++)
{
    if (balls.Length >= 1000)
    {
        return;
    }
    else
    {
        // here I'm trying to fix the issue by checking whether the ball exists before
        // calling the function, but the console still shows me the same message
        if (balls[i] != null)
        {
            balls[i].CallM3x();
            i++;
        }
    }
}
The Unity console message is:
IndexOutOfRangeException: Index was outside the bounds of the array.
My code above mostly works, but the issue is that some balls are destroyed before the for loop finishes its work, by less than a second.
Any idea how to avoid this?
There are two fundamental issues with your loop:
you're looping up to i <= balls.Length which will always throw an error (there is no a[10] in an array of length 10).
you're incrementing i again within the loop, which will skip items
Change the loop condition to i < balls.Length and remove the i++ within the loop.
Also, you don't need to check if (balls.Length >= 1000) inside the loop on every iteration; you can check that once before the loop (but what's the problem with a collection of more than 1,000 items anyway?)
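For example, a minimal version of the loop with both fixes applied and the length check hoisted out (CallM3x() is the multiply method from the snippet above):
Ball[] balls = FindObjectsOfType<Ball>();

// Check the count once, before the loop
if (balls.Length >= 1000)
    return;

for (int i = 0; i < balls.Length; i++) // i < Length, and only the for statement increments i
{
    if (balls[i] != null)
        balls[i].CallM3x();
}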
the issue is that some balls are destroyed before the for loop finishes its work, by less than a second
I don't see how this can be a problem. You're creating an array of references that you're looping over. Even if items are destroyed within the game, the object references still exist within your array. There may be some other flag to tell you that an object is destroyed (I am not a Unity expert), but they would not change to null inside your array.
A simple way to avoid the issue of re-referencing objects you have deleted in a loop is by counting i down from balls.Length to 0 instead of up from 0 to balls.Length.
for (int i = balls.Length - 1; i >= 0; i--)
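Applied to the snippet in the question, that reverse loop might look like this (just a sketch; apart from the direction of iteration, the behaviour is unchanged):
Ball[] balls = FindObjectsOfType<Ball>();
for (int i = balls.Length - 1; i >= 0; i--)
{
    // Same null check as before; the loop simply walks the array from the end to the start
    if (balls[i] != null)
        balls[i].CallM3x();
}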
Also, you might want to look again at your for-loop: you are going up to <= balls.Length, allowing the highest index to be = balls.Length. That index is out of bounds though, because we start counting at 0. I suggest using < instead of <= here.
Ball[] balls = FindObjectsOfType<Ball>();
for (int i = 0; i < balls.Length; i++)
{
    if (balls.Length >= 1000)
    {
        return;
    }
    else
    {
        if (balls[i] != null)
        {
            balls[i].CallM3x();
            // i++; <- This is your problem. Remove it.
        }
    }
}
I have a struct similar to this:
struct Chunk
{
    public float d;     // distance from player
    public bool active; // is it active
}
I have an array of this struct.
What I need:
I need to sort it so that the first element is an inactive chunk that is also the furthest from the player, then the next element is active and also the closest to the player, and that is the pattern:
Inactive, furthest
Active, closest
Inactive, furthest
Active, closest
Inactive, furthest
And so on...
Currently I'm using LINQ, and I'm doing this:
chunk = chunk.AsParallel().OrderBy(x => x.active ? x.d : -x.d).ToArray();
But I don't know how to make it alternate one after another.
It looks like you want to split this into two lists, sort them, then interleave them.
var inactive = chunk.Where(x => !x.active).OrderByDescending(x => x.d);
var active = chunk.Where(x => x.active).OrderBy(x => x.d);
var interleaved = inactive.Zip(active, (a, b) => new [] { a, b }).SelectMany(x => x);
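One thing to watch out for: Enumerable.Zip stops at the shorter of the two sequences, so if the inactive and active counts differ, the extra chunks are silently dropped. A sketch that appends the leftovers afterwards (assuming the Chunk struct from the question):
var inactive = chunk.Where(x => !x.active).OrderByDescending(x => x.d).ToList();
var active = chunk.Where(x => x.active).OrderBy(x => x.d).ToList();

// Interleave the pairs, then tack on whichever group had elements left over
var interleaved = inactive.Zip(active, (a, b) => new[] { a, b })
                          .SelectMany(x => x)
                          .Concat(inactive.Skip(active.Count))
                          .Concat(active.Skip(inactive.Count))
                          .ToArray();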
But I don't know how to make it alternate one after another.
If you sort the array so that all the inactives are at the start, descending and all the actives are at the end, descending..
OrderBy(x => x.active ? 1 : 0).ThenBy(x => -x.d)
then you can take an item from the start, then an item from the end, then from the start + 1, then the end - 1 working your way inwards
public static IEnumerable<Chunk> Interleave(this Chunk[] chunks)
{
    // <= so that the element where the two indices meet is not dropped
    for (int s = 0, e = chunks.Length - 1; s <= e;)
    {
        if (!chunks[s].active)
            yield return chunks[s++];
        if (chunks[e].active)
            yield return chunks[e--];
    }
}
There's a bit in this, so let's unpack it. This is an extension method that acts on an array of Chunk. It's a custom enumerator method, so you'd call foreach on it to use it:
foreach(var c in chunk.Interleave())
It contains a for loop that tracks two variables, one for the start index and one for the end. The start increments and the end decrements. At some point they'll cross, s will no longer be less than or equal to e, and that's when we stop:
for (int s = 0, e = chunks.Length - 1; s <= e;) {
We need to look at the chunk before we return it. If it's an inactive one near the start, yield return it and bump the start on by one. s++ increments s but resolves to the value s had before it incremented, so it's conceptually like doing chunks[s]; s += 1; in a one-liner:
if(!chunks[s].active)
yield return chunks[s++];
Then we look at the chunk near the end; if it's active, return it and bump the end index down.
The inactive chunks are tracked by s, and if s reaches an active chunk it stops returning (it is skipped on every pass of the loop), which means e will work its way down towards s, returning only the actives.
Similarly, if there are more inactives than actives, e will stop decrementing first and s will work its way up towards e.
If you've never come across yield return before, think of it as a way to resume from where you left off rather than starting the method over again. It's used with enumerations to give the enumeration a way to return an item, then be moved on by one and return the next item. It works a bit like saving your game and going off to do something else, then coming back, loading your save and carrying on from where you left off. Asking an enumerator for the next item makes it load the game, play a bit, then save and stop. Then you ask for the next item again, the latest save is loaded, you play some more, save and stop. This way you gradually get through the game a bit at a time. If you started a new enumeration by calling Interleave again, that's like starting a new game over from the beginning.
MSDN goes into more detail on yield return if you want to dig in further.
Edit:
You can perform an in-place sort of your Chunk[] by having a custom comparer:
public class InactiveThenDistancedDescending : IComparer
{
    public int Compare(object x, object y)
    {
        var a = (Chunk)x;
        var b = (Chunk)y;
        if (a.active == b.active)
            return -a.d.CompareTo(b.d);          // within a group, furthest first
        else
            return a.active.CompareTo(b.active); // inactive (false) sorts before active (true)
    }
}
And:
Array.Sort(chunkArray, _someInstanceOfThatComparerAbove);
Not sure if you can do it with only one line of code.
I wrote a method that only requires the array to be sorted once. It then adds either the next closest or the next furthest chunk based on the current index of the for loop (odd = closest, even = furthest). I remove the item from the sorted list to ensure that it will not be added to the results again. Finally, I return the results as an array.
public Chunk[] SortArray(List<Chunk> list_to_sort)
{
    // Setup variables
    var results = new List<Chunk>();
    int count = list_to_sort.Count;

    // Sorting once: inactive before active, and within each group furthest first
    list_to_sort = list_to_sort.OrderBy(x => x.active).ThenByDescending(x => x.d).ToList();

    // Loop through the list
    for (int i = 0; i < count; i++)
    {
        // even index -> take the front (inactive, furthest); odd index -> take the back (active, closest)
        int take = i % 2 == 0 ? 0 : list_to_sort.Count - 1;
        results.Add(list_to_sort[take]);
        list_to_sort.RemoveAt(take);
    }

    // Return results
    return results.ToArray();
}
There is probably a better way of doing this but hopefully it helps. Please note that I did not test this method.
I'm working on a music application that reads LilyPond and MIDI files.
Both file types need to be converted into our own storage format.
For LilyPond you need to read repeats and their alternatives.
There can, however, be a difference in count.
If there are three repeats but two alternatives, the first two repeats get the first alternative and the third repeat gets the last alternative.
Because the re-use starts at the front, I don't know how to do it.
My current code looks like this; the only part missing is combining the repeatList and the altList.
I was hoping there is a mathematical solution for this, because flipping the arrays would be terrible for performance.
private List<Note> readRepeat(List<string> repeat, List<string> alt, int repeatCount)
{
    List<Note> noteList = new List<Note>();
    List<Note> repeatList = new List<Note>();
    List<List<Note>> altList = new List<List<Note>>();
    foreach (string line in repeat)
    {
        repeatList.AddRange(readNoteLine(line));
    }
    foreach (string line in alt)
    {
        altList.Add(readNoteLine(line));
    }
    while (repeatCount > 0)
    {
        List<Note> toAdd = repeatList.ToList(); // Clone the list, to destroy the reference
        if (altList.Count() != 0)
        {
            // logic to add the right alt
        }
        noteList.AddRange(toAdd);
        repeatCount--;
    }
    return noteList;
}
In the above code the two lists get populated with the notes.
Their functions are as follows:
RepeatList: The basic list of notes that gets played x times
AltList: The list of possibilities to be added to the repeat list.
Some example I/O
RepeatCount = 4
AltList.Count() = 3
Repeat 1 gets: Alt 1
Repeat 2 gets: Alt 1
Repeat 3 gets: Alt 2
Repeat 4 gets: Alt 3
Example in visual style
From what I understand, here is a possible way to implement the logic part (basically you just have to maintain a cursor on the appropriate alternative). This goes in place of your while loop.
int currentAlt = 0;
for (int currentRepeat = 0; currentRepeat < repeatCount; currentRepeat++)
{
    noteList.AddRange(repeatList);
    noteList.AddRange(altList[currentAlt]);
    if (currentRepeat >= repeatCount - altList.Count)
    {
        currentAlt++;
    }
}
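A quick trace of that cursor logic against the example I/O above (repeatCount = 4, three alternatives), just to show which alternative each repeat picks up:
int repeatCount = 4, altCount = 3, currentAlt = 0;
for (int currentRepeat = 0; currentRepeat < repeatCount; currentRepeat++)
{
    Console.WriteLine("Repeat {0} gets: Alt {1}", currentRepeat + 1, currentAlt + 1);
    // Only start advancing the cursor once the remaining repeats equal the remaining alternatives
    if (currentRepeat >= repeatCount - altCount)
    {
        currentAlt++;
    }
}
// Output: Repeat 1 gets: Alt 1, Repeat 2 gets: Alt 1, Repeat 3 gets: Alt 2, Repeat 4 gets: Alt 3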
I know that in order to clean up your code you use loops. For example, writing
myItem = myArray[0]
my2ndItem = myArray[1]
up to element 100 or so will take 100 lines of your code and will be ugly. It can be slimmed down to look nicer and be more efficient using a loop.
Let's say you're creating a game and you have 100 non player controlled enemies that spawn in an EXACT position, each position differing.
How on earth would you spawn every single one without using 100 lines of code?
I can understand if, say for example, you wanted them all 25 yards apart, you could just use a loop and increment by 25.
So with that said, defining everything statically seems like the only possible way I can see to do it, yet I know doing it dynamically is the way to go.
How do people do this? And if you could provide other similar examples that would be great.
var spawnLocs = new SpawnLocation[10];

if (RandomPosition)
{
    for (int i = 0; i < spawnLocs.Length; i++)
    {
        spawnLocs[i] = // auto define using algorithm / logic
    }
}
else
{
    spawnLocs[0] = // manually define spawn location
    spawnLocs[1] =
    ...
    ...
    spawnLocs[9] =
}
Spawn new enemy:
void SpawnEnemy(int spawnedCount)
{
    if (EnemyOnMap < 10 && spawnedCount < 100)
    {
        var rd = new Random();
        Enemy e = new Enemy();
        SpawnLocation spawnLoc = new SpawnLocation();
        bool locationOk = false;
        while (!locationOk)
        {
            spawnLoc = spawnLocs[rd.Next(0, spawnLocs.Length)];
            if (!spawnLoc.IsOccupied)
            {
                locationOk = true;
            }
        }
        e.SpawnLocation = spawnLoc;
        this.Map.AddNewEnemy(e);
    }
}
Just specify the positions in an array or list or whatever data format suits the purpose, then implement a three-line loop that reads from that input for each enemy and finds and sets the corresponding position.
Most languages will have some kind of "shorthand" format for providing the data for an array or list directly, as in C# for instance:
var myList = new List<EnemyPosition>{
    new EnemyPosition{ x = 0, y = 1 },
    new EnemyPosition{ x = 45, y = 31 },
    new EnemyPosition{ x = 12, y = 9 },
    (...)
};
You could naturally put the same data in an XML file, or a database, or your grandmother's cupboard for that matter, as long as you have some interface to retrieve it when needed.
Then you can set it up with a loop, just as in your question.
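The loop itself can then be as short as this (SpawnEnemyAt is a hypothetical helper standing in for however your game actually creates an enemy at a position):
foreach (var pos in myList)
{
    // One enemy per entry, at exactly the position specified in the data
    SpawnEnemyAt(pos.x, pos.y);
}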
I'm working on a simple demo for collision detection, which contains only a bunch of objects bouncing around in the window. (The goal is to see how many objects the game can handle at once without dropping frames.)
There is gravity, so the objects are either moving or else colliding with a wall.
The naive solution was O(n^2):
foreach Collidable c1:
    foreach Collidable c2:
        checkCollision(c1, c2);
This is pretty bad. So I set up CollisionCell objects, which maintain information about a portion of the screen. The idea is that each Collidable only needs to check for the other objects in its cell. With 60 px by 60 px cells, this yields almost a 10x improvement, but I'd like to push it further.
A profiler has revealed that the code spends 50% of its time in the function each cell uses to get its contents. Here it is:
// all the objects in this cell
public ICollection<GameObject> Containing
{
    get
    {
        ICollection<GameObject> containing = new HashSet<GameObject>();
        foreach (GameObject obj in engine.GameObjects)
        {
            // 20% of processor time spent in this conditional
            if (obj.Position.X >= bounds.X &&
                obj.Position.X < bounds.X + bounds.Width &&
                obj.Position.Y >= bounds.Y &&
                obj.Position.Y < bounds.Y + bounds.Height)
            {
                containing.Add(obj);
            }
        }
        return containing;
    }
}
That is, 20% of the program's total time is spent in that conditional.
Here is where the above function gets called:
// Get a list of lists of cell contents
List<List<GameObject>> cellContentsSet = cellManager.getCellContents();

// foreach item, only check items in the same cell
foreach (List<GameObject> cellMembers in cellContentsSet)
{
    foreach (GameObject item in cellMembers)
    {
        // process collisions
    }
}

//...

// Gets a list of lists of cell contents (each sub-list = 1 cell)
internal List<List<GameObject>> getCellContents()
{
    List<List<GameObject>> result = new List<List<GameObject>>();
    foreach (CollisionCell cell in cellSet)
    {
        result.Add(new List<GameObject>(cell.Containing.ToArray()));
    }
    return result;
}
Right now, I have to iterate over every cell, even empty ones. Perhaps this could be improved on somehow, but I'm not sure how to verify that a cell is empty without looking at it. (Maybe I could implement something like the sleeping objects found in some physics engines, where an object that stays still for a while goes to sleep and is not included in the calculations for every frame.)
What can I do to optimize this? (Also, I'm new to C# - are there any other glaring stylistic errors?)
When the game starts lagging, the objects tend to be packed fairly tightly, so there's not that much motion going on. Perhaps I can take advantage of this somehow, writing a function to see whether, given an object's current velocity, it can possibly leave its current cell before the next call to Update().
UPDATE 1: I decided to maintain a list of the objects that were found to be in the cell at the last update, and check those first to see if they were still in the cell. I also maintained the remaining area of the CollisionCell, so that once the cell was filled I could stop looking. Here is my implementation of that, and it made the whole demo much slower:
// all the objects in this cell
private ICollection<GameObject> prevContaining;
private ICollection<GameObject> containing;
internal ICollection<GameObject> Containing {
get {
return containing;
}
}
/**
* To ensure that `containing` and `prevContaining` are up to date, this MUST be called once per Update() loop in which it is used.
* What is a good way to enforce this?
*/
public void updateContaining()
{
ICollection<GameObject> result = new HashSet<GameObject>();
uint area = checked((uint) bounds.Width * (uint) bounds.Height); // the area of this cell
// first, try to fill up this cell with objects that were in it previously
ICollection<GameObject>[] toSearch = new ICollection<GameObject>[] { prevContaining, engine.GameObjects };
foreach (ICollection<GameObject> potentiallyContained in toSearch) {
if (area > 0) { // redundant, but faster?
foreach (GameObject obj in potentiallyContained) {
if (obj.Position.X >= bounds.X &&
obj.Position.X < bounds.X + bounds.Width &&
obj.Position.Y >= bounds.Y &&
obj.Position.Y < bounds.Y + bounds.Height) {
result.Add(obj);
area -= checked((uint) Math.Pow(obj.Radius, 2)); // assuming objects are square
if (area <= 0) {
break;
}
}
}
}
}
prevContaining = containing;
containing = result;
}
UPDATE 2: I abandoned that last approach. Now I'm trying to maintain a pool of collidables (orphans) and remove objects from it when I find a cell that contains them:
internal List<List<GameObject>> getCellContents() {
List<GameObject> orphans = new List<GameObject>(engine.GameObjects);
List<List<GameObject>> result = new List<List<GameObject>>();
foreach (CollisionCell cell in cellSet) {
cell.updateContaining(ref orphans); // this call will alter orphans!
result.Add(new List<GameObject>(cell.Containing));
if (orphans.Count == 0) {
break;
}
}
return result;
}
// `orphans` is a list of GameObjects that do not yet have a cell
public void updateContaining(ref List<GameObject> orphans) {
ICollection<GameObject> result = new HashSet<GameObject>();
for (int i = 0; i < orphans.Count; i++) {
// 20% of processor time spent in this conditional
if (orphans[i].Position.X >= bounds.X &&
orphans[i].Position.X < bounds.X + bounds.Width &&
orphans[i].Position.Y >= bounds.Y &&
orphans[i].Position.Y < bounds.Y + bounds.Height) {
result.Add(orphans[i]);
orphans.RemoveAt(i);
}
}
containing = result;
}
This only yields a marginal improvement, not the 2x or 3x I'm looking for.
UPDATE 3: Again I abandoned the above approaches and decided to let each object maintain a reference to its current cell:
private CollisionCell currCell;
internal CollisionCell CurrCell {
get {
return currCell;
}
set {
currCell = value;
}
}
This value gets updated:
// Run 1 cycle of this object
public virtual void Run()
{
position += velocity;
parent.CellManager.updateContainingCell(this);
}
CellManager code:
private IDictionary<Vector2, CollisionCell> cellCoords = new Dictionary<Vector2, CollisionCell>();
internal void updateContainingCell(GameObject gameObject) {
CollisionCell currCell = findContainingCell(gameObject);
gameObject.CurrCell = currCell;
if (currCell != null) {
currCell.Containing.Add(gameObject);
}
}
// null if no such cell exists
private CollisionCell findContainingCell(GameObject gameObject) {
if (gameObject.Position.X > GameEngine.GameWidth
|| gameObject.Position.X < 0
|| gameObject.Position.Y > GameEngine.GameHeight
|| gameObject.Position.Y < 0) {
return null;
}
// we'll need to be able to access these outside of the loops
uint minWidth = 0;
uint minHeight = 0;
for (minWidth = 0; minWidth + cellWidth < gameObject.Position.X; minWidth += cellWidth) ;
for (minHeight = 0; minHeight + cellHeight < gameObject.Position.Y; minHeight += cellHeight) ;
CollisionCell currCell = cellCoords[new Vector2(minWidth, minHeight)];
// Make sure `currCell` actually contains gameObject
Debug.Assert(gameObject.Position.X >= currCell.Bounds.X && gameObject.Position.X <= currCell.Bounds.Width + currCell.Bounds.X,
String.Format("{0} should be between lower bound {1} and upper bound {2}", gameObject.Position.X, currCell.Bounds.X, currCell.Bounds.X + currCell.Bounds.Width));
Debug.Assert(gameObject.Position.Y >= currCell.Bounds.Y && gameObject.Position.Y <= currCell.Bounds.Height + currCell.Bounds.Y,
String.Format("{0} should be between lower bound {1} and upper bound {2}", gameObject.Position.Y, currCell.Bounds.Y, currCell.Bounds.Y + currCell.Bounds.Height));
return currCell;
}
I thought this would make things better: now I only have to iterate over the collidables, not over all collidables times all cells. Instead, the game is now hideously slow, delivering only a tenth of the performance of my approaches above.
The profiler indicates that a different method is now the main hot spot, and the time to get neighbors for an object is trivially short. That method didn't change from before, so perhaps I'm calling it WAY more than I used to...
It spends 50% of its time in that function because you call that function a lot. Optimizing that one function will only yield incremental improvements to performance.
Alternatively, just call the function less!
You've already started down that path by setting up a spatial partitioning scheme (lookup Quadtrees to see a more advanced form of your technique).
A second approach is to break your N*N loop into an incremental form and to use a CPU budget.
You can allocate a CPU budget for each of the modules that want action during frame times (during Updates). Collision is one of these modules, AI might be another.
Let's say you want to run your game at 60 fps. This means you have about 1/60 s = 0.0167 s of CPU time to burn between frames. Now we can split those 0.0167 s between our modules. Let's give collision 30% of the budget: 0.005 s.
Now your collision algorithm knows that it can only spend 0.005 s working. So if it runs out of time, it will need to postpone some tasks for later - you will make the algorithm incremental. Code for achieving this can be as simple as:
const double CollisionBudget = 0.005;
Collision[] _allPossibleCollisions;
int _lastCheckedCollision;

void HandleCollisions()
{
    var startTime = HighPerformanceCounter.Now;
    if (_allPossibleCollisions == null ||
        _lastCheckedCollision >= _allPossibleCollisions.Length)
    {
        // Start a new series
        _allPossibleCollisions = GenerateAllPossibleCollisions();
        _lastCheckedCollision = 0;
    }
    for (var i = _lastCheckedCollision; i < _allPossibleCollisions.Length; i++)
    {
        // Don't go over the budget
        if (HighPerformanceCounter.Now - startTime > CollisionBudget)
        {
            break;
        }
        _lastCheckedCollision = i;
        if (CheckCollision(_allPossibleCollisions[i]))
        {
            HandleCollision(_allPossibleCollisions[i]);
        }
    }
}
There, now it doesn't matter how fast the collision code is; it will be done as quickly as possible without affecting the user's perceived performance.
Benefits include:
The algorithm is designed to run out of time, it just resumes on the next frame, so you don't have to worry about this particular edge case.
CPU budgeting becomes more and more important as the number of advanced/time consuming algorithms increases. Think AI. So it's a good idea to implement such a system early on.
Human response time is less than 30 Hz, your frame loop is running at 60 Hz. That gives the algorithm 30 frames to complete its work, so it's OK that it doesn't finish its work.
Doing it this way gives stable, data-independent frame rates.
It still benefits from performance optimizations to the collision algorithm itself.
Collision algorithms are designed to track down the "sub frame" in which collisions happened. That is, you will never be so lucky as to catch a collision just as it happens - thinking you're doing so is lying to yourself.
I can help here; I wrote my own collision detection as an experiment. I think I can tell you right now that you won't get the performance you need without changing algorithms. Sure, the naive way is nice, but it only works for so many items before collapsing. What you need is sweep and prune. The basic idea goes like this (from my collision detection library project):
using System.Collections.Generic;
using AtomPhysics.Interfaces;
namespace AtomPhysics.Collisions
{
public class SweepAndPruneBroadPhase : IBroadPhaseCollider
{
private INarrowPhaseCollider _narrowPhase;
private AtomPhysicsSim _sim;
private List<Extent> _xAxisExtents = new List<Extent>();
private List<Extent> _yAxisExtents = new List<Extent>();
private Extent e1;
public SweepAndPruneBroadPhase(INarrowPhaseCollider narrowPhase)
{
_narrowPhase = narrowPhase;
}
public AtomPhysicsSim Sim
{
get { return _sim; }
set { _sim = null; }
}
public INarrowPhaseCollider NarrowPhase
{
get { return _narrowPhase; }
set { _narrowPhase = value; }
}
public bool NeedsNotification { get { return true; } }
public void Add(Nucleus nucleus)
{
Extent xStartExtent = new Extent(nucleus, ExtentType.Start);
Extent xEndExtent = new Extent(nucleus, ExtentType.End);
_xAxisExtents.Add(xStartExtent);
_xAxisExtents.Add(xEndExtent);
Extent yStartExtent = new Extent(nucleus, ExtentType.Start);
Extent yEndExtent = new Extent(nucleus, ExtentType.End);
_yAxisExtents.Add(yStartExtent);
_yAxisExtents.Add(yEndExtent);
}
public void Remove(Nucleus nucleus)
{
foreach (Extent e in _xAxisExtents)
{
if (e.Nucleus == nucleus)
{
_xAxisExtents.Remove(e);
}
}
foreach (Extent e in _yAxisExtents)
{
if (e.Nucleus == nucleus)
{
_yAxisExtents.Remove(e);
}
}
}
public void Update()
{
_xAxisExtents.InsertionSort(comparisonMethodX);
_yAxisExtents.InsertionSort(comparisonMethodY);
for (int i = 0; i < _xAxisExtents.Count; i++)
{
e1 = _xAxisExtents[i];
if (e1.Type == ExtentType.Start)
{
HashSet<Extent> potentialCollisionsX = new HashSet<Extent>();
for (int j = i + 1; j < _xAxisExtents.Count && _xAxisExtents[j].Nucleus.ID != e1.Nucleus.ID; j++)
{
potentialCollisionsX.Add(_xAxisExtents[j]);
}
HashSet<Extent> potentialCollisionsY = new HashSet<Extent>();
for (int j = i + 1; j < _yAxisExtents.Count && _yAxisExtents[j].Nucleus.ID != e1.Nucleus.ID; j++)
{
potentialCollisionsY.Add(_yAxisExtents[j]);
}
List<Extent> probableCollisions = new List<Extent>();
foreach (Extent e in potentialCollisionsX)
{
if (potentialCollisionsY.Contains(e) && !probableCollisions.Contains(e) && e.Nucleus.ID != e1.Nucleus.ID)
{
probableCollisions.Add(e);
}
}
foreach (Extent e2 in probableCollisions)
{
if (e1.Nucleus.DNCList.Contains(e2.Nucleus) || e2.Nucleus.DNCList.Contains(e1.Nucleus))
continue;
NarrowPhase.DoCollision(e1.Nucleus, e2.Nucleus);
}
}
}
}
private bool comparisonMethodX(Extent e1, Extent e2)
{
float e1PositionX = e1.Nucleus.NonLinearSpace != null ? e1.Nucleus.NonLinearPosition.X : e1.Nucleus.Position.X;
float e2PositionX = e2.Nucleus.NonLinearSpace != null ? e2.Nucleus.NonLinearPosition.X : e2.Nucleus.Position.X;
e1PositionX += (e1.Type == ExtentType.Start) ? -e1.Nucleus.Radius : e1.Nucleus.Radius;
e2PositionX += (e2.Type == ExtentType.Start) ? -e2.Nucleus.Radius : e2.Nucleus.Radius;
return e1PositionX < e2PositionX;
}
private bool comparisonMethodY(Extent e1, Extent e2)
{
float e1PositionY = e1.Nucleus.NonLinearSpace != null ? e1.Nucleus.NonLinearPosition.Y : e1.Nucleus.Position.Y;
float e2PositionY = e2.Nucleus.NonLinearSpace != null ? e2.Nucleus.NonLinearPosition.Y : e2.Nucleus.Position.Y;
e1PositionY += (e1.Type == ExtentType.Start) ? -e1.Nucleus.Radius : e1.Nucleus.Radius;
e2PositionY += (e2.Type == ExtentType.Start) ? -e2.Nucleus.Radius : e2.Nucleus.Radius;
return e1PositionY < e2PositionY;
}
private enum ExtentType { Start, End }
private sealed class Extent
{
private ExtentType _type;
public ExtentType Type
{
get
{
return _type;
}
set
{
_type = value;
_hashcode = 23;
_hashcode *= 17 + Nucleus.GetHashCode();
}
}
private Nucleus _nucleus;
public Nucleus Nucleus
{
get
{
return _nucleus;
}
set
{
_nucleus = value;
_hashcode = 23;
_hashcode *= 17 + Nucleus.GetHashCode();
}
}
private int _hashcode;
public Extent(Nucleus nucleus, ExtentType type)
{
Nucleus = nucleus;
Type = type;
_hashcode = 23;
_hashcode *= 17 + Nucleus.GetHashCode();
}
public override bool Equals(object obj)
{
return Equals(obj as Extent);
}
public bool Equals(Extent extent)
{
if (this.Nucleus == extent.Nucleus)
{
return true;
}
return false;
}
public override int GetHashCode()
{
return _hashcode;
}
}
}
}
and here's the code that does the insertion sort (more-or-less a direct translation of the pseudocode here):
/// <summary>
/// Performs an insertion sort on the list.
/// </summary>
/// <typeparam name="T">The type of the list supplied.</typeparam>
/// <param name="list">the list to sort.</param>
/// <param name="comparison">the method for comparison of two elements.</param>
/// <returns></returns>
public static void InsertionSort<T>(this IList<T> list, Func<T, T, bool> comparison)
{
    // 0-based indices: start at 1 and let the inner loop compare all the way down to element 0
    for (int i = 1; i < list.Count; i++)
    {
        for (int j = i; j > 0 && comparison(list[j], list[j - 1]); j--)
        {
            T tempItem = list[j];
            list.RemoveAt(j);
            list.Insert(j - 1, tempItem);
        }
    }
}
IIRC, I was able to get an extremely large performance increase with that, especially when dealing with large numbers of colliding bodies. You'll need to adapt it to your code, but that's the basic premise behind sweep and prune.
The other thing I want to remind you of is that you should use a profiler, like the one made by Red Gate. There's a free trial which should last you long enough.
It looks like you are looping through all the game objects just to see which objects are contained in a cell. A better approach would be to store, for each cell, the list of game objects that are in it. If you do that, and each object knows which cells it is in, then moving objects between cells should be easy. This seems like it would yield the biggest performance gain.
Here is another optimization tip for determining which cells an object is in:
If you have already determined which cell(s) an object is in, and you know from the object's velocity that it will not change cells for the current frame, there is no need to rerun the logic that determines which cells the object is in. You can do a quick check by creating a bounding box that contains all the cells the object is in, and another bounding box that is the size of the object plus its velocity for the current frame, as sketched below. If the cell bounding box contains the object-plus-velocity bounding box, no further checks need to be done. If the object isn't moving, it's even easier: you can just use the object's bounding box.
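A rough sketch of that early-out (the names are illustrative rather than taken from your code; Bounds, Velocity and CurrentCellBounds are rectangles/vectors you would maintain yourself):
// Bounding box of the object, grown by its velocity for this frame
Rectangle swept = obj.Bounds;
swept.Inflate((int)Math.Abs(obj.Velocity.X), (int)Math.Abs(obj.Velocity.Y));

if (obj.CurrentCellBounds.Contains(swept))
{
    // The object cannot leave its current cell(s) this frame, so skip re-assigning cells
}
else
{
    // Re-run the logic that determines which cell(s) the object is in
}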
Let me know if that makes sense, or google / bing search for "Quad Tree", or if you don't mind using open source code, check out this awesome physics library: http://www.codeplex.com/FarseerPhysics
I'm in the exact same boat as you. I'm trying to create an overhead shooter and need to push efficiency to the max so I can have tons of bullets and enemies on screen at once.
I'd get all of my collidable objects into an array with a numbered index. This affords the opportunity to take advantage of an observation: if you iterate over the list fully for each item, you'll be duplicating efforts. That is (and note, I'm making up variable names just to make it easier to spit out some pseudo-code):
if (objs[49].Intersects(objs[51]))
is equivalent to:
if (objs[51].Intersects(objs[49]))
So if you use a numbered index you can save some time by not duplicating efforts. Do this instead:
for (int i1 = 0; i1 < collidables.Count; i1++)
{
    // By setting i2 = i1 + 1 you ensure an object isn't checking collision with itself,
    // and that objects already checked against i1 aren't checked again. For instance,
    // collidables[4] doesn't need to check against collidables[0] again, since that pair
    // was checked earlier.
    for (int i2 = i1 + 1; i2 < collidables.Count; i2++)
    {
        // Check collisions here
    }
}
Also, I'd have each cell keep either a count or a flag to determine whether you even need to check for collisions there. If a certain flag is set, or if the count is less than 2, there's no need to check for collisions, as in the sketch below.
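For instance (cells and cell.Count are stand-ins for however you store per-cell contents, and CheckCollisionsWithin is a hypothetical helper):
foreach (var cell in cells)
{
    // 0 or 1 objects in the cell means nothing can collide there, so skip it entirely
    if (cell.Count < 2)
        continue;

    CheckCollisionsWithin(cell);
}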
Just a heads up: some people suggest Farseer, which is a great 2D physics library for use with XNA. If you're in the market for a 3D physics engine for XNA, I've used bulletx (a C# port of Bullet) in XNA projects to great effect.
Note: I have no affiliation with the Bullet or bulletx projects.
An idea might be to use a bounding circle. Basically, when a Collidable is created, keep track of its centre point and calculate a radius/diameter that contains the whole object. You can then do a first-pass elimination using something like:
int r = C1.BoundingRadius + C2.BoundingRadius;

if (Math.Abs(C1.X - C2.X) > r && Math.Abs(C1.Y - C2.Y) > r)
    // Skip further checks...
This drops the comparisons to two for most objects, but how much this will gain you I'm not sure...profile!
There are a couple of things that could be done to speed up the process... but as far as I can see your method of checking for simple rectangular collision is just fine.
But I'd replace the check
if (obj.Position.X ....)
With
if (obj.Bounds.IntersectsWith(this.Bounds))
And I'd also replace the line
result.Add(new List<GameObject>(cell.Containing.ToArray()));
For
result.Add(new List<GameObject>(cell.Containing));
As the Containing property returns an ICollection<T>, which implements IEnumerable<T>, which in turn is accepted by the List<T> constructor.
The ToArray() method simply iterates over the collection to build an array, and that work is done all over again when creating the new list.
I know this thread is old, but I would say that the accepted answer is completely wrong...
His code contains a fatal error and doesn't give a performance improvement; it costs performance!
First, a little notice...
His code is written so that you have to call it in your Draw method, but that is the wrong place for collision detection. In your Draw method you should only draw, nothing else!
But you can't simply call HandleCollisions() in Update either, because Update gets called a lot more often than Draw.
If you want to call HandleCollisions(), your code has to look like this. This code prevents your collision detection from running more than once per frame.
private bool check = false;

protected override void Update(GameTime gameTime)
{
    if (!check)
    {
        check = true;
        HandleCollisions();
    }
}

protected override void Draw(GameTime gameTime)
{
    check = false;
}
Now let us take a look at what's wrong with HandleCollisions().
Example: we have 500 objects and we check every possible collision, without optimizing our detection.
With 500 objects we should have 249,500 collision checks (499 x 500, because we don't want to check whether an object collides with itself).
But with Frank's code below we will lose 99.998% of those collisions (only 500 collision checks will be done). << THAT is where the supposed performance increase comes from!
Why? Because _lastCheckedCollision will never be equal to or greater than _allPossibleCollisions.Length... and because of that, you would only ever check the last index, 499.
for (var i = _lastCheckedCollision; i < _allPossibleCollisions.Length; i++)
    _lastCheckedCollision = i;
    // << This can never end up equal to _allPossibleCollisions.Length,
    // because i always has to be lower than _allPossibleCollisions.Length
You have to replace this
if (_allPossibleCollisions == null ||
_lastCheckedCollision >= _allPossibleCollisions.Length)
with this
if (_allPossibleCollisions == null ||
_lastCheckedCollision >= _allPossibleCollisions.Length - 1) {
Alternatively, your whole code can be replaced by this:
private bool check = false;

protected override void Update(GameTime gameTime)
{
    if (!check)
    {
        check = true;
        _allPossibleCollisions = GenerateAllPossibleCollisions();
        for (int i = 0; i < _allPossibleCollisions.Length; i++)
        {
            if (CheckCollision(_allPossibleCollisions[i]))
            {
                // Collision!
            }
        }
    }
}

protected override void Draw(GameTime gameTime)
{
    check = false;
}
... this should be a lot faster than your code ... and it works :D ...
RCIX's answer should be marked as correct, because Frank's answer is wrong.