C# XNA: Optimizing Collision Detection?

I'm working on a simple demo for collision detection, which contains only a bunch of objects bouncing around in the window. (The goal is to see how many objects the game can handle at once without dropping frames.)
There is gravity, so the objects are either moving or else colliding with a wall.
The naive solution was O(n^2):
foreach Collidable c1:
foreach Collidable c2:
checkCollision(c1, c2);
This is pretty bad. So I set up CollisionCell objects, which maintain information about a portion of the screen. The idea is that each Collidable only needs to check for the other objects in its cell. With 60 px by 60 px cells, this yields almost a 10x improvement, but I'd like to push it further.
A profiler has revealed that the code spends 50% of its time in the function each cell uses to get its contents. Here it is:
// all the objects in this cell
public ICollection<GameObject> Containing
{
get
{
ICollection<GameObject> containing = new HashSet<GameObject>();
foreach (GameObject obj in engine.GameObjects) {
// 20% of processor time spent in this conditional
if (obj.Position.X >= bounds.X &&
obj.Position.X < bounds.X + bounds.Width &&
obj.Position.Y >= bounds.Y &&
obj.Position.Y < bounds.Y + bounds.Height) {
containing.Add(obj);
}
}
return containing;
}
}
Of that, 20% of the program's time is spent in that conditional alone.
Here is where the above function gets called:
// Get a list of lists of cell contents
List<List<GameObject>> cellContentsSet = cellManager.getCellContents();
// foreach item, only check items in the same cell
foreach (List<GameObject> cellMembers in cellContentsSet) {
foreach (GameObject item in cellMembers) {
// process collisions
}
}
//...
// Gets a list of list of cell contents (each sub list = 1 cell)
internal List<List<GameObject>> getCellContents() {
List<List<GameObject>> result = new List<List<GameObject>>();
foreach (CollisionCell cell in cellSet) {
result.Add(new List<GameObject>(cell.Containing.ToArray()));
}
return result;
}
Right now, I have to iterate over every cell - even empty ones. Perhaps this could be improved on somehow, but I'm not sure how to verify that a cell is empty without looking at it. (Maybe I could implement something like the sleeping objects found in some physics engines, where an object that stays still for a while goes to sleep and is excluded from the per-frame calculations.)
What can I do to optimize this? (Also, I'm new to C# - are there any other glaring stylistic errors?)
When the game starts lagging out, the objects tend to be packed fairly tightly, so there's not that much motion going on. Perhaps I can take advantage of this somehow by writing a function to see if, given an object's current velocity, it can possibly leave its current cell before the next call to Update().
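Something along these lines, perhaps (just a sketch - a public Velocity property and per-frame pixel units are assumptions, and bounds is the cell's rectangle):
// Returns true if, at its current velocity, the object cannot cross a cell
// boundary before the next Update().
private bool StaysInCell(GameObject obj)
{
    float nextX = obj.Position.X + obj.Velocity.X;
    float nextY = obj.Position.Y + obj.Velocity.Y;
    return nextX >= bounds.X && nextX < bounds.X + bounds.Width
        && nextY >= bounds.Y && nextY < bounds.Y + bounds.Height;
}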
UPDATE 1 I decided to maintain a list of the objects that were found to be in the cell at the last update, and check those first to see if they were still in the cell. I also maintained an area variable in CollisionCell, so that once the cell was filled I could stop looking. Here is my implementation, which made the whole demo much slower:
// all the objects in this cell
private ICollection<GameObject> prevContaining;
private ICollection<GameObject> containing;
internal ICollection<GameObject> Containing {
get {
return containing;
}
}
/**
* To ensure that `containing` and `prevContaining` are up to date, this MUST be called once per Update() loop in which it is used.
* What is a good way to enforce this?
*/
public void updateContaining()
{
ICollection<GameObject> result = new HashSet<GameObject>();
uint area = checked((uint) bounds.Width * (uint) bounds.Height); // the area of this cell
// first, try to fill up this cell with objects that were in it previously
ICollection<GameObject>[] toSearch = new ICollection<GameObject>[] { prevContaining, engine.GameObjects };
foreach (ICollection<GameObject> potentiallyContained in toSearch) {
if (area > 0) { // redundant, but faster?
foreach (GameObject obj in potentiallyContained) {
if (obj.Position.X >= bounds.X &&
obj.Position.X < bounds.X + bounds.Width &&
obj.Position.Y >= bounds.Y &&
obj.Position.Y < bounds.Y + bounds.Height) {
result.Add(obj);
area -= checked((uint) Math.Pow(obj.Radius, 2)); // assuming objects are square
if (area <= 0) {
break;
}
}
}
}
}
prevContaining = containing;
containing = result;
}
UPDATE 2 I abandoned that last approach. Now I'm trying to maintain a pool of collidables (orphans), and remove objects from it when I find a cell that contains them:
internal List<List<GameObject>> getCellContents() {
List<GameObject> orphans = new List<GameObject>(engine.GameObjects);
List<List<GameObject>> result = new List<List<GameObject>>();
foreach (CollisionCell cell in cellSet) {
cell.updateContaining(ref orphans); // this call will alter orphans!
result.Add(new List<GameObject>(cell.Containing));
if (orphans.Count == 0) {
break;
}
}
return result;
}
// `orphans` is a list of GameObjects that do not yet have a cell
public void updateContaining(ref List<GameObject> orphans) {
ICollection<GameObject> result = new HashSet<GameObject>();
for (int i = 0; i < orphans.Count; i++) {
// 20% of processor time spent in this conditional
if (orphans[i].Position.X >= bounds.X &&
orphans[i].Position.X < bounds.X + bounds.Width &&
orphans[i].Position.Y >= bounds.Y &&
orphans[i].Position.Y < bounds.Y + bounds.Height) {
result.Add(orphans[i]);
orphans.RemoveAt(i);
i--; // step back one index, since RemoveAt shifted the next element into this slot
}
}
containing = result;
}
This only yields a marginal improvement, not the 2x or 3x I'm looking for.
UPDATE 3 Again I abandoned the above approaches, and decided to let each object maintain its current cell:
private CollisionCell currCell;
internal CollisionCell CurrCell {
get {
return currCell;
}
set {
currCell = value;
}
}
This value gets updated:
// Run 1 cycle of this object
public virtual void Run()
{
position += velocity;
parent.CellManager.updateContainingCell(this);
}
CellManager code:
private IDictionary<Vector2, CollisionCell> cellCoords = new Dictionary<Vector2, CollisionCell>();
internal void updateContainingCell(GameObject gameObject) {
CollisionCell currCell = findContainingCell(gameObject);
gameObject.CurrCell = currCell;
if (currCell != null) {
currCell.Containing.Add(gameObject);
}
}
// null if no such cell exists
private CollisionCell findContainingCell(GameObject gameObject) {
if (gameObject.Position.X > GameEngine.GameWidth
|| gameObject.Position.X < 0
|| gameObject.Position.Y > GameEngine.GameHeight
|| gameObject.Position.Y < 0) {
return null;
}
// we'll need to be able to access these outside of the loops
uint minWidth = 0;
uint minHeight = 0;
for (minWidth = 0; minWidth + cellWidth < gameObject.Position.X; minWidth += cellWidth) ;
for (minHeight = 0; minHeight + cellHeight < gameObject.Position.Y; minHeight += cellHeight) ;
CollisionCell currCell = cellCoords[new Vector2(minWidth, minHeight)];
// Make sure `currCell` actually contains gameObject
Debug.Assert(gameObject.Position.X >= currCell.Bounds.X && gameObject.Position.X <= currCell.Bounds.Width + currCell.Bounds.X,
String.Format("{0} should be between lower bound {1} and upper bound {2}", gameObject.Position.X, currCell.Bounds.X, currCell.Bounds.X + currCell.Bounds.Width));
Debug.Assert(gameObject.Position.Y >= currCell.Bounds.Y && gameObject.Position.Y <= currCell.Bounds.Height + currCell.Bounds.Y,
String.Format("{0} should be between lower bound {1} and upper bound {2}", gameObject.Position.Y, currCell.Bounds.Y, currCell.Bounds.Y + currCell.Bounds.Height));
return currCell;
}
I thought this would make it better - now I only have to iterate over collidables, not all collidables * cells. Instead, the game is now hideously slow, delivering only a tenth of the performance of my earlier approaches.
The profiler indicates that a different method is now the main hot spot, and the time to get neighbors for an object is trivially short. That method didn't change from before, so perhaps I'm calling it WAY more than I used to...

It spends 50% of its time in that function because you call that function a lot. Optimizing that one function will only yield incremental improvements to performance.
Alternatively, just call the function less!
You've already started down that path by setting up a spatial partitioning scheme (look up Quadtrees to see a more advanced form of your technique).
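For reference, a heavily stripped-down quadtree might look something like this (purely illustrative - the MaxObjects/MaxDepth constants and the use of your GameObject plus XNA's Rectangle/Vector2 are assumptions, not code from your project):
class QuadTree
{
    private const int MaxObjects = 8;   // split a node once it holds more than this
    private const int MaxDepth = 5;     // never split deeper than this

    private readonly Rectangle _bounds;
    private readonly int _depth;
    private readonly List<GameObject> _objects = new List<GameObject>();
    private QuadTree[] _children;

    public QuadTree(Rectangle bounds, int depth = 0)
    {
        _bounds = bounds;
        _depth = depth;
    }

    public void Insert(GameObject obj)
    {
        if (_children != null)
        {
            QuadTree child = ChildContaining(obj.Position);
            if (child != null) { child.Insert(obj); return; }
        }
        _objects.Add(obj);
        if (_children == null && _objects.Count > MaxObjects && _depth < MaxDepth)
            Split();
    }

    // Gather collision candidates: everything stored on the path down to the leaf.
    public void Retrieve(Vector2 position, List<GameObject> results)
    {
        results.AddRange(_objects);
        if (_children == null) return;
        QuadTree child = ChildContaining(position);
        if (child != null) child.Retrieve(position, results);
    }

    private void Split()
    {
        int w = _bounds.Width / 2, h = _bounds.Height / 2;
        _children = new[]
        {
            new QuadTree(new Rectangle(_bounds.X,     _bounds.Y,     w, h), _depth + 1),
            new QuadTree(new Rectangle(_bounds.X + w, _bounds.Y,     w, h), _depth + 1),
            new QuadTree(new Rectangle(_bounds.X,     _bounds.Y + h, w, h), _depth + 1),
            new QuadTree(new Rectangle(_bounds.X + w, _bounds.Y + h, w, h), _depth + 1)
        };
        // Push the existing objects down into whichever child contains them.
        List<GameObject> existing = new List<GameObject>(_objects);
        _objects.Clear();
        foreach (GameObject obj in existing) Insert(obj);
    }

    private QuadTree ChildContaining(Vector2 position)
    {
        foreach (QuadTree child in _children)
            if (child._bounds.Contains((int)position.X, (int)position.Y))
                return child;
        return null;
    }
}
Each frame you would insert every collidable, then call Retrieve for each object to get a short candidate list for the narrow-phase check.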
A second approach is to break your N*N loop into an incremental form and to use a CPU budget.
You can allocate a CPU budget for each of the modules that want action during frame times (during Updates). Collision is one of these modules, AI might be another.
Let's say you want to run your game at 60 fps. This means you have about 1/60 s = 0.0167 s of CPU time to burn between frames. Now we can split those 0.0167 s between our modules. Let's give collision 30% of the budget: 0.005 s.
Now your collision algorithm knows that it can only spend 0.005 s working. So if it runs out of time, it will need to postpone some tasks for later - you will make the algorithm incremental. Code for achieving this can be as simple as:
const double CollisionBudget = 0.005;
Collision[] _allPossibleCollisions;
int _lastCheckedCollision;
void HandleCollisions() {
var startTime = HighPerformanceCounter.Now;
if (_allPossibleCollisions == null ||
_lastCheckedCollision >= _allPossibleCollisions.Length) {
// Start a new series
_allPossibleCollisions = GenerateAllPossibleCollisions();
_lastCheckedCollision = 0;
}
for (var i=_lastCheckedCollision; i<_allPossibleCollisions.Length; i++) {
// Don't go over the budget
if (HighPerformanceCounter.Now - startTime > CollisionBudget) {
break;
}
_lastCheckedCollision = i;
if (CheckCollision(_allPossibleCollisions[i])) {
HandleCollision(_allPossibleCollisions[i]);
}
}
}
There, now it doesn't matter how fast the collision code is - it will be done as quickly as possible without affecting the user's perceived performance.
Benefits include:
The algorithm is designed to run out of time; it just resumes on the next frame, so you don't have to worry about that edge case.
CPU budgeting becomes more and more important as the number of advanced/time consuming algorithms increases. Think AI. So it's a good idea to implement such a system early on.
Human response time is slower than 30 Hz, while your frame loop is running at 60 Hz. That gives the algorithm several frames to complete its work, so it's OK if it doesn't finish in a single pass.
Doing it this way gives stable, data-independent frame rates.
It still benefits from performance optimizations to the collision algorithm itself.
Collision algorithms are designed to track down the "sub frame" in which collisions happened. That is, you will never be so lucky as to catch a collision just as it happens - thinking you're doing so is lying to yourself.

I can help here; I wrote my own collision detection as an experiment. I think I can tell you right now that you won't get the performance you need without changing algorithms. Sure, the naive way is nice, but it only works for so many items before collapsing. What you need is sweep and prune. The basic idea goes like this (from my collision detection library project):
using System.Collections.Generic;
using AtomPhysics.Interfaces;
namespace AtomPhysics.Collisions
{
public class SweepAndPruneBroadPhase : IBroadPhaseCollider
{
private INarrowPhaseCollider _narrowPhase;
private AtomPhysicsSim _sim;
private List<Extent> _xAxisExtents = new List<Extent>();
private List<Extent> _yAxisExtents = new List<Extent>();
private Extent e1;
public SweepAndPruneBroadPhase(INarrowPhaseCollider narrowPhase)
{
_narrowPhase = narrowPhase;
}
public AtomPhysicsSim Sim
{
get { return _sim; }
set { _sim = value; }
}
public INarrowPhaseCollider NarrowPhase
{
get { return _narrowPhase; }
set { _narrowPhase = value; }
}
public bool NeedsNotification { get { return true; } }
public void Add(Nucleus nucleus)
{
Extent xStartExtent = new Extent(nucleus, ExtentType.Start);
Extent xEndExtent = new Extent(nucleus, ExtentType.End);
_xAxisExtents.Add(xStartExtent);
_xAxisExtents.Add(xEndExtent);
Extent yStartExtent = new Extent(nucleus, ExtentType.Start);
Extent yEndExtent = new Extent(nucleus, ExtentType.End);
_yAxisExtents.Add(yStartExtent);
_yAxisExtents.Add(yEndExtent);
}
public void Remove(Nucleus nucleus)
{
// Removing entries inside a foreach would invalidate the enumerator,
// so remove the matching extents with RemoveAll instead.
_xAxisExtents.RemoveAll(e => e.Nucleus == nucleus);
_yAxisExtents.RemoveAll(e => e.Nucleus == nucleus);
}
public void Update()
{
_xAxisExtents.InsertionSort(comparisonMethodX);
_yAxisExtents.InsertionSort(comparisonMethodY);
for (int i = 0; i < _xAxisExtents.Count; i++)
{
e1 = _xAxisExtents[i];
if (e1.Type == ExtentType.Start)
{
HashSet<Extent> potentialCollisionsX = new HashSet<Extent>();
for (int j = i + 1; j < _xAxisExtents.Count && _xAxisExtents[j].Nucleus.ID != e1.Nucleus.ID; j++)
{
potentialCollisionsX.Add(_xAxisExtents[j]);
}
HashSet<Extent> potentialCollisionsY = new HashSet<Extent>();
for (int j = i + 1; j < _yAxisExtents.Count && _yAxisExtents[j].Nucleus.ID != e1.Nucleus.ID; j++)
{
potentialCollisionsY.Add(_yAxisExtents[j]);
}
List<Extent> probableCollisions = new List<Extent>();
foreach (Extent e in potentialCollisionsX)
{
if (potentialCollisionsY.Contains(e) && !probableCollisions.Contains(e) && e.Nucleus.ID != e1.Nucleus.ID)
{
probableCollisions.Add(e);
}
}
foreach (Extent e2 in probableCollisions)
{
if (e1.Nucleus.DNCList.Contains(e2.Nucleus) || e2.Nucleus.DNCList.Contains(e1.Nucleus))
continue;
NarrowPhase.DoCollision(e1.Nucleus, e2.Nucleus);
}
}
}
}
private bool comparisonMethodX(Extent e1, Extent e2)
{
float e1PositionX = e1.Nucleus.NonLinearSpace != null ? e1.Nucleus.NonLinearPosition.X : e1.Nucleus.Position.X;
float e2PositionX = e2.Nucleus.NonLinearSpace != null ? e2.Nucleus.NonLinearPosition.X : e2.Nucleus.Position.X;
e1PositionX += (e1.Type == ExtentType.Start) ? -e1.Nucleus.Radius : e1.Nucleus.Radius;
e2PositionX += (e2.Type == ExtentType.Start) ? -e2.Nucleus.Radius : e2.Nucleus.Radius;
return e1PositionX < e2PositionX;
}
private bool comparisonMethodY(Extent e1, Extent e2)
{
float e1PositionY = e1.Nucleus.NonLinearSpace != null ? e1.Nucleus.NonLinearPosition.Y : e1.Nucleus.Position.Y;
float e2PositionY = e2.Nucleus.NonLinearSpace != null ? e2.Nucleus.NonLinearPosition.Y : e2.Nucleus.Position.Y;
e1PositionY += (e1.Type == ExtentType.Start) ? -e1.Nucleus.Radius : e1.Nucleus.Radius;
e2PositionY += (e2.Type == ExtentType.Start) ? -e2.Nucleus.Radius : e2.Nucleus.Radius;
return e1PositionY < e2PositionY;
}
private enum ExtentType { Start, End }
private sealed class Extent
{
private ExtentType _type;
public ExtentType Type
{
get
{
return _type;
}
set
{
_type = value;
_hashcode = 23;
_hashcode *= 17 + Nucleus.GetHashCode();
}
}
private Nucleus _nucleus;
public Nucleus Nucleus
{
get
{
return _nucleus;
}
set
{
_nucleus = value;
_hashcode = 23;
_hashcode *= 17 + Nucleus.GetHashCode();
}
}
private int _hashcode;
public Extent(Nucleus nucleus, ExtentType type)
{
Nucleus = nucleus;
Type = type;
_hashcode = 23;
_hashcode *= 17 + Nucleus.GetHashCode();
}
public override bool Equals(object obj)
{
return Equals(obj as Extent);
}
public bool Equals(Extent extent)
{
// Guard against null, since Equals(object) above passes "obj as Extent".
if (extent == null)
{
return false;
}
return this.Nucleus == extent.Nucleus;
}
public override int GetHashCode()
{
return _hashcode;
}
}
}
}
and here's the code that does the insertion sort (more-or-less a direct translation of the pseudocode here):
/// <summary>
/// Performs an insertion sort on the list.
/// </summary>
/// <typeparam name="T">The type of the list supplied.</typeparam>
/// <param name="list">the list to sort.</param>
/// <param name="comparison">the method for comparison of two elements.</param>
/// <returns></returns>
public static void InsertionSort<T>(this IList<T> list, Func<T, T, bool> comparison)
{
// Note: the pseudocode this is translated from is 1-based; for a 0-based IList
// the outer loop must start at 1 and the inner loop must stop at index 1.
for (int i = 1; i < list.Count; i++)
{
for (int j = i; j > 0 && comparison(list[j], list[j - 1]); j--)
{
// Swap the two adjacent items in place instead of using RemoveAt/Insert.
T tempItem = list[j];
list[j] = list[j - 1];
list[j - 1] = tempItem;
}
}
}
IIRC, I was able to get an extremely large performance increase with that, especially when dealing with large numbers of colliding bodies. You'll need to adapt it for your code, but that's the basic premise behind sweep and prune.
The other thing I want to remind you is that you should use a profiler, like the one made by Red Gate. There's a free trial which should last you long enough.

It looks like you are looping through all the game objects just to see what objects are contained in a cell. It seems like a better approach would be to store the list of game objects that are in a cell for each cell. If you do that and each object knows what cells it is in, then moving objects between cells should be easy. This seems like it will yield the biggest performance gain.
Here is another optimization tip for determining what cells an object is in:
If you have already determined what cell(s) an object is in and know that based on the objects velocity it will not change cells for the current frame, there is no need to rerun the logic that determines what cells the object is in. You can do a quick check by creating a bounding box that contains all the cells that the object is in. You can then create a bounding box that is the size of the object + the velocity of the object for the current frame. If the cell bounding box contains the object + velocity bounding box, no further checks need to be done. If the object isn't moving, it's even easier and you can just use the object bounding box.
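A minimal sketch of that check, assuming XNA's Rectangle, a per-frame Velocity on the object, and that cellBounds is the union of the cells the object currently occupies (all of these names are placeholders):
// Returns true when the object cannot enter a new cell this frame, so the
// cell-assignment logic can be skipped entirely.
static bool StaysInCurrentCells(Rectangle cellBounds, Rectangle objBounds, Vector2 velocity)
{
    // Box covering the object both before and after this frame's movement.
    Rectangle moved = objBounds;
    moved.Offset((int)velocity.X, (int)velocity.Y);
    Rectangle swept = Rectangle.Union(objBounds, moved);
    // If the cells it already occupies contain the whole swept box, the object
    // cannot cross into a different cell this frame.
    return cellBounds.Contains(swept);
}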
Let me know if that makes sense, or google / bing search for "Quad Tree", or if you don't mind using open source code, check out this awesome physics library: http://www.codeplex.com/FarseerPhysics

I'm in the exact same boat as you. I'm trying to create an overhead shooter and need to push efficiency to the max so I can have tons of bullets and enemies on screen at once.
I'd get all of my collidable objects in an array with a numbered index. This affords the opportunity to take advantage of an observation: if you iterate over the list fully for each item you'll be duplicating efforts. That is (and note, I'm making up variables names just to make it easier to spit out some pseudo-code)
if (objs[49].Intersects(objs[51]))
is equivalent to:
if (objs[51].Intersects(objs[49]))
So if you use a numbered index you can save some time by not duplicating efforts. Do this instead:
for (int i1 = 0; i1 < collidables.Count; i1++)
{
//By setting i2 = i1 + 1 you ensure an obj isn't checking collision with itself,
//and that objects already checked against i1 aren't checked again. For instance,
//collidables[4] doesn't need to check against collidables[0] again since this was checked earlier.
for (int i2 = i1 + 1; i2 < collidables.Count; i2++)
{
//Check collisions here
}
}
Also, I'd have each cell keep either a count or a flag to determine whether you even need to check for collisions. If a certain flag is set, or if the count is less than 2, then there's no need to check for collisions.
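In the loop over cells shown in the question, that check could be as simple as (sketch):
foreach (List<GameObject> cellMembers in cellContentsSet)
{
    // A cell with fewer than two members cannot produce a collision pair.
    if (cellMembers.Count < 2)
        continue;
    // ... pairwise checks as above ...
}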

Just a heads up: some people suggest Farseer, which is a great 2D physics library for use with XNA. If you're in the market for a 3D physics engine for XNA, I've used bulletx (a C# port of Bullet) in XNA projects to great effect.
Note: I have no affiliation to the bullet or bulletx projects.

An idea might be to use a bounding circle. Basically, when a Collidable is created, keep track of its centre point and calculate a radius/diameter that contains the whole object. You can then do a first-pass elimination using something like:
int r = C1.BoundingRadius + C2.BoundingRadius;
if( Math.Abs(C1.X - C2.X) > r && Math.Abs(C1.Y - C2.Y) > r )
/// Skip further checks...
This drops the comparisons to two for most objects, but how much this will gain you I'm not sure...profile!
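When that quick rejection doesn't fire, the full circle-versus-circle test can use squared distances to avoid a square root (sketch, reusing the same hypothetical members):
int dx = C1.X - C2.X;
int dy = C1.Y - C2.Y;
int r = C1.BoundingRadius + C2.BoundingRadius;
// Compare squared distances so no Math.Sqrt call is needed.
bool circlesOverlap = dx * dx + dy * dy <= r * r;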

There are a couple of things that could be done to speed up the process... but as far as I can see your method of checking for simple rectangular collision is just fine.
But I'd replace the check
if (obj.Position.X ....)
With
if (obj.Bounds.Intersects(this.Bounds))
And I'd also replace the line
result.Add(new List<GameObject>(cell.Containing.ToArray()));
For
result.Add(new List<GameObject>(cell.Containing));
Since the Containing property returns an ICollection<T>, which implements IEnumerable<T>, it can be passed straight to the List<T> constructor.
The ToArray() call simply iterates over the collection to build an array, and that work is then repeated when the new list is created from it.

I know this thread is old, but I would say that the marked answer is completely wrong...
His code contains a fatal error and doesn't give a performance improvement - it will cost performance!
First, a small note...
His code is written so that you have to call it in your Draw method, but that is the wrong place for collision detection. In your Draw method you should only draw, nothing else!
But you can't simply call HandleCollisions() in Update either, because Update gets called a lot more often than Draw.
If you want to call HandleCollisions(), your code has to look like this... This code will prevent your collision detection from running more than once per frame.
private bool check = false;
protected override void Update(GameTime gameTime)
{
if(!check)
{
check = true;
HandleCollisions();
}
}
protected override void Draw(GameTime gameTime)
{
check = false;
}
Now let's take a look at what's wrong with HandleCollisions().
Example: we have 500 objects and we check every possible collision without optimizing our detection.
With 500 objects we should have 249,500 collision checks (499 x 500, because we don't want to check whether an object collides with itself).
But with Frank's code above, we will lose about 99.8% of those collisions (only around 500 collision checks will ever be done). << THAT is where the apparent performance gain comes from!
Why? Because _lastCheckedCollision will never become equal to or greater than _allPossibleCollisions.Length... and because of that you will only ever check the last index, 499.
for (var i=_lastCheckedCollision; i<_allPossibleCollisions.Length; i++)
_lastCheckedCollision = i;
//<< This can never equal _allPossibleCollisions.Length,
//because i is always lower than _allPossibleCollisions.Length
You have to replace this
if (_allPossibleCollisions == null ||
_lastCheckedCollision >= _allPossibleCollisions.Length)
with this
if (_allPossibleCollisions == null ||
_lastCheckedCollision >= _allPossibleCollisions.Length - 1) {
So your whole code can be replaced by this:
private bool check = false;
protected override void Update(GameTime gameTime)
{
if(!check)
{
check = true;
_allPossibleCollisions = GenerateAllPossibleCollisions();
for(int i=0; i < _allPossibleCollisions.Length; i++)
{
if (CheckCollision(_allPossibleCollisions[i]))
{
//Collision!
}
}
}
}
protected override void Draw(GameTime gameTime)
{
check = false;
}
... this should be a lot faster than your code ... and it works :D ...
RCIX's answer should be marked as correct, because Frank's answer is wrong.

Related

PUN 2 Getting and Setting Custom Properties

I'm still relatively new to Photon since the deprecation of UNet. I'm having trouble getting and setting local custom properties. I'm trying to have two different teams (players and angels) be chosen. Each player starts as a spectator. A certain percentage of players are chosen to be the angels, and the rest are assigned as players. I can manage to get and set the property of a randomly chosen player, but I can't seem to assign the values for the remaining players. The snippet of code is below.
private IEnumerator TeamBalance()
{
angelCount = Mathf.Floor(PhotonNetwork.PlayerList.Length * angelPercent);
currentAngels = angelCount;
for (int i = 0; i < angelCount;)
{
int index = Random.Range(0, PhotonNetwork.PlayerList.Length);
if (PhotonNetwork.PlayerList[index].CustomProperties["team"].ToString() == "spectator")
{
PhotonNetwork.PlayerList[index].CustomProperties["team"] = "angel";
i++;
}
}
foreach (var player in PhotonNetwork.PlayerList)
{
if (player.CustomProperties["team"].ToString() == "spectator")
{
player.CustomProperties["team"] = "player";
}
}
yield return null;
}
The end result for 3 players ends up picking 1 angel, but with 2 spectators still remaining.
You need to use the Player.SetCustomProperties method to set properties instead of assigning them directly. This allows PUN to track what has changed and propagate the update to the other clients properly.
https://doc-api.photonengine.com/en/pun/v2/class_photon_1_1_realtime_1_1_player.html#a0c1010eda4f775ff56f8c86b026be41e
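As a rough sketch, reusing the same "team" key as in the question (note that the Hashtable here is ExitGames.Client.Photon.Hashtable, not System.Collections.Hashtable):
// Instead of: PhotonNetwork.PlayerList[index].CustomProperties["team"] = "angel";
var props = new ExitGames.Client.Photon.Hashtable { { "team", "angel" } };
PhotonNetwork.PlayerList[index].SetCustomProperties(props);
// Reads can still go through CustomProperties once the update has been applied:
// string team = (string)PhotonNetwork.PlayerList[index].CustomProperties["team"];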

Consuming a number into descending groups

I always struggle with these types of algorithms. I have a scenario where I have a cubic value for freight and need to split this value into cartons of different sizes; there are 3 sizes available in this instance: 0.12m3, 0.09m3 and 0.05m3. A few examples:
Assume total m3 is 0.16m3, I need to consume this value into the appropriate cartons.
I will have 1 carton of 0.12m3, which leaves 0.04m3 to consume. This fits into the 0.05m3 carton, so I will have 1 carton of 0.05m3; consumption is now complete. The final answer is 1 x 0.12m3 and 1 x 0.05m3.
Assume total m3 is 0.32m3, I would end up with 2 x 0.12m3 and 1 x 0.09m3.
I would prefer something either in c# or SQL that would easily return to me the results.
Many thanks for any help.
Cheers
I wrote an algorithm that may be a little messy, but I do think it works. Your problem statement isn't completely unambiguous, so this solution assumes you want to pick cartons so that you minimize the remaining space, starting by filling from the largest carton.
// List of cartons
var cartons = new List<double>
{
0.12,
0.09,
0.05
};
// Amount of stuff that you want to put into cartons
var stuff = 0.32;
var distribution = new Dictionary<double, int>();
// For this algorithm, I want to sort by descending first.
cartons = cartons.OrderByDescending(x => x).ToList();
foreach (var carton in cartons)
{
var count = 0;
while (stuff >= 0)
{
if (stuff >= carton)
{
// If the amount of stuff bigger than the carton size, we use carton size, then update stuff
count++;
stuff = stuff - carton;
distribution.CreateNewOrUpdateExisting(carton, 1);
}
else
{
// Otherwise, among remaining cartons we pick the ones that will have empty space if the remaining stuff is put in
var partial = cartons.Where(x => x - stuff >= 0 && x != carton);
if (partial != null && partial.Count() > 0)
{
var min = partial.Min();
if (min > 0)
{
distribution.CreateNewOrUpdateExisting(min, 1);
stuff = stuff - min;
}
}
else
{
break;
}
}
}
}
There's an accompanying extension method, which either adds an item to the dictionary or, if the key already exists, increments the value.
public static class DictionaryExtensions
{
public static void CreateNewOrUpdateExisting(this IDictionary<double, int> map, double key, int value)
{
if (map.ContainsKey(key))
{
map[key]++;
}
else
{
map.Add(key, value);
}
}
}
EDIT
Found a bug in the case where the initial stuff is smaller than the largest carton, so the code has been updated to fix it.
NOTE
This may still not be a 100% foolproof algorithm, as I haven't tested it extensively. But it should give you an idea of how to proceed.
EDIT EDIT
Changing the condition to while (stuff > 0) should fix the bug mentioned in the comments.
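For illustration, here is one (hypothetical) way to print the resulting distribution after the loop above has finished; for stuff = 0.32 it should show the 2 x 0.12m3 and 1 x 0.09m3 from the question:
// Requires System.Linq for OrderByDescending.
foreach (var entry in distribution.OrderByDescending(e => e.Key))
{
    Console.WriteLine("{0} x {1}m3", entry.Value, entry.Key);
}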

How to binary search a list of Panels

I have a list of panels, sorted by their y-values. You can see my question from earlier about the specifics of why this is structured this way. Long story short, this List has the highest panel at position 0, the one below it at position 1, etc, down to the last one at the last position. I am accessing the y-coordinate of each panel using this line of code adapted from my linked question:
Panel p = panelList[someIndex];
int panelHeight = p.Top + p.Parent.Top - p.Parent.Margin.Top;
//The above line guarantees that the first panel (index 0) has y-coordinate 0 when scrolled all the way up,
//and becomes negative as the user scrolls down.
//the second panel starts with a positive y-coordinate, but grows negative after the user scrolls past the top of that page
//and so on...
I need to find the index of the panel closest to height 0, so I know which panels are currently on, or very near being on, the page. Therefore, I am trying to use the List.BinarySearch() method, which is where I'm stuck. I'm hoping to take advantage of BinarySearch's property of returning the index that the value would be at if it did exist in the list. That way I can just search for the panel at height 0 (which I don't expect to find), find the element nearest it (say at y=24 or y=-5 or so), and know that that is the panel being rendered at the moment.
Binary Search lets you specify an IComparer to define the < and > operations, so I wrote this class:
class PanelLocationComparer : IComparer<Panel>
{
public int Compare(Panel x, Panel y)
{
//start by checking all the cases for invalid input
if (x == null && y == null) { return 0; }
else if (x == null && y != null) { return -1; }
else if (x != null && y == null) { return 1; }
else//both values are defined, compare their y values
{
int xHeight = x.Top + x.Parent.Top - x.Parent.Margin.Top;
int yHeight = y.Top + y.Parent.Top - y.Parent.Margin.Top;
if (xHeight > yHeight)
{
return 1;
}
else if (xHeight < yHeight)
{
return -1;
}
else
{
return 0;
}
}
}
}
That doesn't work, and I'm realizing now that it is because comparing two panels for greater than or less than doesn't actually care about what value I'm searching for, in this case y value = 0. Is there a way to implement this in an IComparer, or is there a way to even do this type of search using the built-in BinarySearch?
I considered just making a new List of the same length as my Panel list every time, copying the y-values into it, and then searching through this list of ints for 0, but creating, searching, and destroying that list every time they scroll will hurt performance so much that it defeats the point of the binary search.
My question is also related to this one, but I couldn't figure out how to adapt it because they ultimately use a built-in comparison method, which I don't have access to in this situation.
Unfortunately, the built-in BinarySearch methods cannot handle such a scenario. All they can do is search for a list item, or for something that can be extracted from a list item. Sometimes they can be used with a fake item and an appropriate comparer, but that is not applicable here.
On the other hand, binary search is quite a simple algorithm, so you can easily create one for your specific case - or better, create a custom extension method so you don't repeat yourself the next time you need something like this:
public static class Algorithms
{
public static int BinarySearch<TSource, TValue>(this IReadOnlyList<TSource> source, TValue value, Func<TSource, TValue> valueSelector, IComparer<TValue> valueComparer = null)
{
return source.BinarySearch(0, source.Count, value, valueSelector, valueComparer);
}
public static int BinarySearch<TSource, TValue>(this IReadOnlyList<TSource> source, int start, int count, TValue value, Func<TSource, TValue> valueSelector, IComparer<TValue> valueComparer = null)
{
if (valueComparer == null) valueComparer = Comparer<TValue>.Default;
int lo = start, hi = lo + count - 1;
while (lo <= hi)
{
int mid = lo + (hi - lo) / 2;
int compare = valueComparer.Compare(value, valueSelector(source[mid]));
if (compare < 0) hi = mid - 1;
else if (compare > 0) lo = mid + 1;
else return mid;
}
return ~lo; // Same behavior as the built-in methods
}
}
and then simply use:
int index = panelList.BinarySearch(0, p => p.Top + p.Parent.Top - p.Parent.Margin.Top);
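As with the built-in overloads, a negative result is the bitwise complement of the index where the value would be inserted, so you can still recover the nearest panel. Something along these lines (illustrative):
int index = panelList.BinarySearch(0, p => p.Top + p.Parent.Top - p.Parent.Margin.Top);
if (index < 0)
{
    // No panel sits exactly at y = 0; ~index is the first panel whose computed
    // y-value is greater than 0, clamped here to stay within the list.
    index = Math.Min(~index, panelList.Count - 1);
}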

How Can I Reduce Lines with Using Switch-Case Properly

I'm working on a Blackjack game project. I have a helper() method to help the user decide what to do. For example:
dealer's up card is: 8
player's hand total is: 16
The player is not sure whether he should hit or stay. The helper() function comes into play here.
It basically counts the number of good cards left in the deck (playerTotal + goodcard <= 21).
So I'm thinking about doing it this way (pseudocode):
public void helper() {
remain = 21 - playerTotal;
if (remain == 1) {
for (int i = 0; i < deck.last(); i++) {
switch (deck[i]) {
case A: numOfGood += 1
default: numOfBad +=1
}
}
}
else if (remain == 2) {
for (....) {
switch (deck[i]) {
case A: numOfGood += 1
case 2: numOfGood += 1
default: numOfBad +=1
}
}
}
//goes like this
}
I need to build a switch-case and for loop for all cards (A,2,3,4,5,6,7,8,9,10,J,Q,K), but it seems like a huge mess. How can I reduce the number of lines by doing something different?
First write a GetValue method that can compute the (minimum) numeric value for a card. You can implement it with a switch or however else you want:
public static int GetValue(char card)
{
//...
}
Once you have that the implementation of your method becomes far shorter and simpler:
foreach(var card in deck)
if(GetValue(card) <= remain)
numOfGood++;
else
numOfBad++;
Also note that you could just count the number of good or bad cards, and use the total remaining cards to compute the other, if needed.
var oddsOfSuccessfulHit = deck.Count(card => GetValue(card) <= remain) /
(double) deck.Count;
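GetValue itself isn't shown above; one possible version, assuming cards are single characters where 'T' stands for ten and an ace is counted at its minimum value of 1, could be:
public static int GetValue(char card)
{
    switch (card)
    {
        case 'A':
            return 1;             // minimum value of an ace
        case 'T':
        case 'J':
        case 'Q':
        case 'K':
            return 10;
        default:
            return card - '0';    // '2' through '9'
    }
}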
You could use a HashSet; it's probably a little more efficient to use a switch, but if you want to save lines...
var goodCards = new HashSet<char>(new[] { 'A', '2' });
then something like,
var numOfGood = deck.Count(card => goodCards.Contains(card));
var numOfBad = deck.Count - numOfGood;
Alternatively, since the logic of card values cannot change, there is no need to code it - just store it as data.
struct CardEffect
{
public string CardGlyph;
public int MinValue;
public int MaxValue;
}
... load from an XML file or some other location into ...
public Dictionary<string, CardEffect> cardValues;
Then use the logic Servy has suggested.

How do I keep a list of only the last n objects?

I want to do some performance measuring of a particular method, but I'd like to average the time it takes to complete. (This is a C# Winforms application, but this question could well apply to other frameworks.)
I have a Stopwatch which I reset at the start of the method and stop at the end. I'd like to store the last 10 values in a list or array. Each new value added should push the oldest value off the list.
Periodically I will call another method which will average all stored values.
Am I correct in thinking that this construct is a circular buffer?
How can I create such a buffer with optimal performance? Right now I have the following:
List<long> PerfTimes = new List<long>(10);
// ...
private void DoStuff()
{
MyStopWatch.Restart();
// ...
MyStopWatch.Stop();
PerfTimes.Add(MyStopWatch.ElapsedMilliseconds);
if (PerfTimes.Count > 10) PerfTimes.RemoveAt(0);
}
This seems inefficient somehow, but perhaps it's not.
Suggestions?
You could create a custom collection:
class SlidingBuffer<T> : IEnumerable<T>
{
private readonly Queue<T> _queue;
private readonly int _maxCount;
public SlidingBuffer(int maxCount)
{
_maxCount = maxCount;
_queue = new Queue<T>(maxCount);
}
public void Add(T item)
{
if (_queue.Count == _maxCount)
_queue.Dequeue();
_queue.Enqueue(item);
}
public IEnumerator<T> GetEnumerator()
{
return _queue.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
}
Your current solution works, but it's inefficient, because removing the first item of a List<T> is expensive.
private int ct = 0;
private long[] times = new long[10];
void DoStuff ()
{
...
times[ct] = MyStopWatch.ElapsedMilliseconds;
ct = (ct + 1) % times.Length; // Wrap back around to 0 when we reach the end.
}
Here is a simple circular structure.
This requires none of the array copying or garbage collection of linked list nodes that the other solutions have.
For optimal performance, you can probably just use an array of longs rather than a list.
We had a similar requirement at one point to implement a download time estimator, and we used a circular buffer to store the speed over each of the last N seconds.
We weren't interested in how fast the download was over the entire time, just roughly how long it was expected to take based on recent activity but not so recent that the figures would be jumping all over the place (such as if we just used the last second to calculate it).
The reason we weren't interested in the entire time frame was that a download could do 1M/s for half an hour, then switch up to 10M/s for the next ten minutes. That first half hour will drag down the average speed quite severely, despite the fact that you're now downloading quite fast.
We created a circular buffer with each cell holding the amount downloaded in a 1-second period. The circular buffer size was 300, allowing for 5 minutes of historical data, and every cell was initialised to zero. In your case, you would only need ten cells.
We also maintained a total (the sum of all entries in the buffer, so also initially zero) and the count (initially zero, obviously).
Every second, we would figure out how much data had been downloaded since the last second and then:
subtract the current cell from the total.
put the current figure into that cell and advance the cell pointer.
add that current figure to the total.
increase the count if it wasn't already 300.
update the figure displayed to the user, based on total / count.
Basically, in pseudo-code:
def init (sz):
buffer = new int[sz]
for i = 0 to sz - 1:
buffer[i] = 0
total = 0
count = 0
index = 0
maxsz = sz
def update (kbps):
total = total - buffer[index] + kbps # Adjust sum based on deleted/inserted values.
buffer[index] = kbps # Insert new value.
index = (index + 1) % maxsz # Update pointer.
if count < maxsz: # Update count.
count = count + 1
return total / count # Return average.
That should be easily adaptable to your own requirements. The sum is a nice feature to "cache" information which may make your code even faster. By that I mean: if you need to work out the sum or average, you can work it out only when the data changes, and using the minimal necessary calculations.
The alternative would be a function which added up all ten numbers when requested, something that would be slower than the single subtract/add when loading another value into the buffer.
You may want to look at using the Queue data structure instead. A simple linear List works but is wholly inefficient for this, since removing from the front shifts every remaining element. A plain array could also be used, but then you have to manage the wrap-around index yourself. Therefore, I suggest you go with the Queue.
I needed to keep the last 5 scores in an array and I came up with this simple solution.
Hope it will help someone.
void UpdateScoreRecords(int _latestScore){
latestScore = _latestScore;
for (int cnt = 0; cnt < scoreRecords.Length; cnt++) {
if (cnt == scoreRecords.Length - 1) {
scoreRecords [cnt] = latestScore;
} else {
scoreRecords [cnt] = scoreRecords [cnt+1];
}
}
}
Seems okay to me. What about using a LinkedList instead? When using a List, if you remove the first item, all of the other items have to be bumped back one item. With a LinkedList, you can add or remove items anywhere in the list at very little cost. However, I don't know how much difference this would make, since we're only talking about ten items.
The trade-off of a linked list is that you can't efficiently access random elements of the list, because the linked list must essentially "walk" along the list, passing each item, until it gets to the one you need. But for sequential access, linked lists are fine.
For Java, it could look like this:
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Queue;
public class SlidingBuffer<T> implements Iterable<T>{
private Queue<T> _queue;
private int _maxCount;
public SlidingBuffer(int maxCount) {
_maxCount = maxCount;
_queue = new LinkedList<T>();
}
public void Add(T item) {
if (_queue.size() == _maxCount)
_queue.remove();
_queue.add(item);
}
public Queue<T> getQueue() {
return _queue;
}
public Iterator<T> iterator() {
return _queue.iterator();
}
}
It could be used like this:
public class ListT {
public static void main(String[] args) {
start();
}
private static void start() {
SlidingBuffer<String> sb = new SlidingBuffer<>(5);
sb.Add("Array1");
sb.Add("Array2");
sb.Add("Array3");
sb.Add("Array4");
sb.Add("Array5");
sb.Add("Array6");
sb.Add("Array7");
sb.Add("Array8");
sb.Add("Array9");
//Test printout
for (String s: sb) {
System.out.println(s);
}
}
}
The result is
Array5
Array6
Array7
Array8
Array9
Years after the latest answer, I stumbled on this question while looking for the same solution. I ended up with a combination of the above answers, especially the cycling approach by agent-j and the queue approach by Thomas Levesque.
public class SlidingBuffer<T> : IEnumerable<T>
{
protected T[] items;
protected int index = -1;
protected bool hasCycled = false;
public SlidingBuffer(int windowSize)
{
items = new T[windowSize];
}
public void Add(T item)
{
index++;
if (index >= items.Length) {
hasCycled = true;
index %= items.Length;
}
items[index] = item;
}
public IEnumerator<T> GetEnumerator()
{
if (index == -1)
yield break;
for (int i = index; i > -1; i--)
{
yield return items[i];
}
if (hasCycled)
{
for (int i = items.Length-1; i > index; i--)
{
yield return items[i];
}
}
}
IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}
I had to forgo the very elegant one-liner from agent-j: ct = (ct + 1) % times.Length;
because I needed to detect when we circled back (through hasCycled) to have a well-behaved enumerator. Note that the enumerator returns values from most recent to oldest.
