Performance issue looping through a big tilemap in Unity 2D C#

(solution is at the end)
I have a tilemap with tiles that can spread to the adjacent tiles based on data I store in a dictionary by position.
While each individual check takes no time at all, the loop as a whole takes considerable resources, resulting in a lag every time I call the function.
Currently I'm making sure the code doesn't run too fast by waiting a little after each loop iteration. While I'm okay with my code not being executed as fast as it could be, this just isn't the right way to go about it.
So basically: Is there a way to limit the execution of a single script/function in Unity/C# so that it doesn't use more than a certain percentage of the game's resources (or something to that effect)? Or maybe there's a way to increase the function's performance significantly that I just can't find?
Thanks in advance!
Solution <-----------
Thanks to the great advice from Michael Urvan and akaBase I was able to vastly increase performance by using chunks. I wasn't able to implement all the suggested improvements, so check out (and upvote) their respective answers for even more performance.
How: I set a bool every couple of seconds. While that bool is set, I loop through part of the necessary tiles (a chunk) in the Update() method and remember the last index for the next Update() call. When I reach the end of the tile list I'm done and can set the bool back to false and reset the index.
Here's a simplified version of my finished code:
{
    [SerializeField] private Tilemap tilemap;
    [SerializeField] private List<TileType> tileTypes;
    [SerializeField] private List<TileType> growthTileTypes;
    private Dictionary<Vector3Int, TileData> tileDataByPosition;
    private List<TileData> spreadingTileDatas;
    private Dictionary<TileBase, TileType> tileTypeByTile;
    private bool spreading;
    private int spreadIndex = 0;
    private int chunkSize = 10;

    private void Awake()
    {
        // set up stuff here
        InvokeRepeating("ResetSpreading", 10, 10);
    }

    private void Update()
    {
        if (spreading)
        {
            Spread();
        }
    }

    private void Spread()
    {
        for (
            int index = spreadIndex;
            index < spreadingTileDatas.Count && index < spreadIndex + chunkSize;
            index++
        )
        {
            // do stuff here
        }
        spreadIndex += chunkSize;
        if (spreadIndex >= spreadingTileDatas.Count)
        {
            spreadIndex = 0;
            spreading = false;
        }
    }
}

To answer your: "Is there a way to limit the execution of a single script/function in Unity/C# to not use more than a certain percentage of the game's resources (or something to that effect)?" - What you want to do is have this loop process only a fraction of the total items per call and call it more often. You can process it in chunks, or you can use time measurement as I note at the end of this post.
I didn't examine your code specifically to see exactly what is being done, but this is a common way of splitting up the processing over time rather than how you were adding a tiny WaitForSeconds() within a Coroutine. The coroutine with such a tiny WaitForSeconds is probably terrible for performance and simply using Update() (which is called every frame) and splitting processing the way I describe will yield much better performance. If you google Coroutine performance you can find information and benchmarks on that.
You will create a variable to keep track of the current position of your index, and a variable that is the maximum number of indexes to process per frame - you will break out of your loop when the total # of items processed hits the max. You also will not do the loop from 0 to X - it will be from currentIndex to the end and then loop back around by setting it to 0. You will just break out when you hit max each time.
To summarize:
Remove IEnumerator from Spread() and make it a regular function, and instead of foreach (var position in tilePositions) use for (int i = 0; ...) etc.
Add a variable like MaxToProcessPerFrame
Add another variable like currentIndex
Add a third variable like totalProcessedThisFrame that is set to zero each time you enter Spread()
for instance if (totalProcessedThisFrame++ >= MaxToProcessPerFrame) break; in your loop
add if (currentIndex >= tilePositions.Count) currentIndex = 0 so that it loops back around to the beginning
I also noticed that Spread() is calling UpdateTiles() at the end, which then does another StartCoroutine(Spread()). That will cause it to run twice, and I'm sure it is inefficient since it can end up double-processing per iteration.
A good place to call UpdateTiles() would be inside the if (currentIndex >= tilePositions.Count) check, since that marks the end of one full pass over the whole list.
Once you have that setup, you can test different values for the MaxToProcessPerFrame, start with a high number and then lower it until it seems to run smoothly and it will break up the processing over multiple frames.
A second method, in addition to breaking it up by a total # of items to be processed, is to use a System.Diagnostics.Stopwatch and break out of your loop when it exceeds a total amount of milliseconds of processing.
These two methods will help it process smoothly once you play with different amounts.
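Here is a minimal, standalone sketch of the chunked approach (outside Unity, with a plain loop standing in for repeated Update() calls; the names MaxToProcessPerFrame, currentIndex, and totalProcessedThisFrame mirror the steps above, but the surrounding class is hypothetical):

```csharp
using System;
using System.Collections.Generic;

public static class ChunkedProcessor
{
    public const int MaxToProcessPerFrame = 10; // tune until it runs smoothly

    private static int currentIndex = 0;

    // Processes at most MaxToProcessPerFrame items per call ("per frame"),
    // resuming from currentIndex. Returns true when a full pass is complete.
    public static bool ProcessChunk(List<int> items, List<int> processed)
    {
        int totalProcessedThisFrame = 0;
        while (currentIndex < items.Count)
        {
            if (totalProcessedThisFrame++ >= MaxToProcessPerFrame)
                return false; // frame budget used up; resume next call
            processed.Add(items[currentIndex]);
            currentIndex++;
        }
        currentIndex = 0; // full pass finished; loop back to the start
        return true;
    }

    public static void Main()
    {
        var items = new List<int>();
        for (int i = 0; i < 25; i++) items.Add(i);

        var processed = new List<int>();
        int frames = 1;
        while (!ProcessChunk(items, processed)) frames++;
        Console.WriteLine($"{processed.Count} items over {frames} frames"); // 25 items over 3 frames
    }
}
```

In Unity, the while loop in Main would simply become the body of Update(), guarded by the spreading flag.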
And lastly, your method of calling StartCoroutine() constantly is also not performant. You should start a coroutine once (or the least number of times needed) and then loop within it. It's common to use a while (Application.isPlaying) { } loop as the basis for long-running coroutines in Unity.
also for reference:
Unity 2017 Game Optimization: Optimize all aspects of Unity performance
By Chris Dickinson

I suggest keeping the data in a multidimensional array and querying only the specific area of the tilemap you need.
Example pseudo code
TileData[,] tileMapDatas;

void Start()
{
    int rows = 5;
    int columns = 5;
    tileMapDatas = new TileData[rows, columns];
    for (int r = 0; r < rows; r++)
    {
        for (int c = 0; c < columns; c++)
        {
            tileMapDatas[r, c] = new TileData(); // Change to how you create them
        }
    }
}

void Spread(Tile tile)
{
    if (tile.CanSpread)
    {
        // Check the map data for the tile above (add a check for array bounds)
        if (tileMapDatas[tile.Row + 1, tile.Column].CanChange)
        {
            // Change
        }
        // Repeat for other directions
    }
}
Using this method is a lot more efficient, as you only need to check the neighbours of the targeted tile.
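Since the pseudo code above says to "add check for array bounds", here is one way that check might look (a standalone sketch; a bool[,] stands in for the TileData[,] and its CanChange flag):

```csharp
using System;

public class TileGrid
{
    private readonly bool[,] canChange; // stand-in for TileData[,].CanChange

    public TileGrid(int rows, int columns)
    {
        canChange = new bool[rows, columns];
    }

    public void Set(int row, int column, bool value) => canChange[row, column] = value;

    // Treats anything outside the map as "cannot change" instead of throwing
    // an IndexOutOfRangeException when checking a neighbour at the map edge.
    public bool NeighbourCanChange(int row, int column)
    {
        if (row < 0 || row >= canChange.GetLength(0) ||
            column < 0 || column >= canChange.GetLength(1))
            return false;
        return canChange[row, column];
    }
}
```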

Related

How to improve GameObject-Array causing Lag

In my game I always have about 500 objects that are active at the same time.
I also have a GameObject[]-array that stores positions for the active objects. All other positions in the array are set to null.
Here's an example:
I have a cube at position 5,5,0.
My GameObject array then looks like: cubes[5,5,0] = cube;
The game runs fine when I set the maximum of the array to something like [100,10,100]. But if I increase this to something like [200,10,200] it starts to lag.
Is there a way to improve the array so that I can save more positions (with still a maximum of 500 objects active at the same time)?
Code:
GameObject cubePrefab;
GameObject[,,] cubes;
bool[,,] world;
int worldX = 100;
int worldY = 5;
int worldZ = 100;

void Start()
{
    world = new bool[worldX, worldY, worldZ];
    cubes = new GameObject[worldX, worldY, worldZ];
}

void Update()
{
    for (var i = playerXMin; i < playerXMax; i++)
        for (var i2 = playerYMin; i2 < playerYMax; i2++)
            for (var i3 = playerZMin; i3 < playerZMax; i3++)
                if (world[i, i2, i3] && cubes[i, i2, i3] == null)
                    cubes[i, i2, i3] = Instantiate(cubePrefab);
}
At the start, about 500 random positions of world get set to true.
playerXMin = player.transform.position.x - 10;
playerXMax = player.transform.position.x + 10;
and so on.
So, based on the comments, you have a very sparse array. A 100 * 10 * 100 array can store 100,000 objects, and when you iterate it you still have to go over all the other 99,500 nulls even if there are just 500 items actually in it.
I would recommend using a more appropriate data type that allows for sparse allocation, for example a Dictionary. Since dictionaries are generic, and you are evidently trying to store objects keyed by a 3D point (not necessarily in space, but still three coordinates), you're looking for something like Dictionary<Vector3Int, GameObject>. I recommend Vector3Int as a key since it only deals in integers, so you'll be able to avoid all the shenanigans related to floating points.
With a sparse structure, you can avoid looping through non-existent positions, which will greatly reduce the time it takes to iterate in such an extreme case. Do note that sparse structures are usually technically slower if you were to fill all 100,000 positions. But since you aren't nearly touching that amount, the reduction in iteration time will help you a lot.
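A standalone sketch of the idea (tuple keys stand in for Unity's Vector3Int so it runs outside Unity, and strings stand in for GameObjects; the class and method names are illustrative):

```csharp
using System;
using System.Collections.Generic;

public class SparseWorld
{
    // Only occupied positions are stored; empty space costs nothing.
    private readonly Dictionary<(int x, int y, int z), string> cubes =
        new Dictionary<(int x, int y, int z), string>();

    public void Add(int x, int y, int z, string cube) => cubes[(x, y, z)] = cube;

    public int Count => cubes.Count;

    // Iterates only the occupied positions (e.g. ~500 of them), never the
    // tens of thousands of empty cells a dense array would make you visit.
    public int CountNear((int x, int y, int z) center, int radius)
    {
        int found = 0;
        foreach (var entry in cubes)
        {
            var pos = entry.Key;
            if (Math.Abs(pos.x - center.x) <= radius &&
                Math.Abs(pos.y - center.y) <= radius &&
                Math.Abs(pos.z - center.z) <= radius)
                found++;
        }
        return found;
    }
}
```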

Performance of Queue? Expecting to have heavy RAM usage but instead I get heavy CPU usage

Let's say I am receiving a signal at a variable rate fluctuating between 50 and 200 times per second. I want to store the timestamp of each signal I received into a queue, so I can remove it from the queue when the signal was received more than 1 week ago.
public Queue<long> myQueue = new Queue<long>();

public void OnSignalReceived()
{
    myQueue.Enqueue(DateTime.UtcNow.Ticks);
    PurgeOldSignals();
}

public void PurgeOldSignals()
{
    while (myQueue.Count > 0 && myQueue.Peek() < DateTime.UtcNow.AddDays(-7).Ticks)
    {
        myQueue.Dequeue();
    }
}
Is there a more efficient way to do this? This is my implementation, and I was expecting to take advantage of using a lot of memory (at an average of 100 signals per second, the queue will hold about 60 million items before starting to purge) in exchange for computational performance, because of the O(1) time to Enqueue() and Dequeue().
After testing, however, I noticed that the bottleneck is the CPU and not the RAM. In fact, the RAM barely gets eaten up, but the CPU usage never ceases to increase. Here is the result after about 16 hours of running (clearly far from my 7-day objective).
Any suggestions to optimize this?
EDIT 1:
In fact, the whole purpose of this is just to know at any time how many signals I got in the last week (precise to the actual second). Maybe there is a better way to achieve this?
For the given task I would make a circular queue of 3600*24*7 integers. Each integer would hold the number of events in that second (one for every second in one week). It would only need a few megabytes. On each measured event, the integer corresponding to the current second (= now) is incremented. It is convenient to keep a running sum of all items in the array and just update it on change, so it can be read fast.
public class History
{
    protected int eventCount = 0;
    protected int[] array;
    protected readonly int _intervalLength_ms;
    long actualTime = 0;
    int actIndex = 0;

    public History(int intervalLength_ms, int numberOfIntervals)
    {
        _intervalLength_ms = intervalLength_ms;
        array = new int[numberOfIntervals];
    }

    public int EventCount
    {
        get
        {
            Update();
            return eventCount;
        }
    }

    public void InsertEvent()
    {
        Update();
        array[actIndex]++;
        eventCount++;
    }

    protected void Update()
    {
        long newTime = DateTime.Now.Ticks / 10000 / _intervalLength_ms;
        while (newTime > actualTime && eventCount > 0)
        {
            actualTime++;
            actIndex++;
            if (actIndex >= array.Length)
            {
                actIndex = 0;
            }
            eventCount -= array[actIndex];
            array[actIndex] = 0;
        }
        if (newTime > actualTime)
        {
            actualTime = newTime;
            actIndex = (int)(actualTime % array.Length);
        }
    }
}
It would be constructed with parameters new History(1000, 3600*24*7).
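Here is a trimmed-down sketch of the same circular-buffer idea, with the clock replaced by an explicit tick parameter so the expiry behaviour can be checked without waiting real time (the names are mine, not from the answer above):

```csharp
using System;

// Counts events in a sliding window of the last `numberOfBuckets` ticks.
public class SlidingWindowCounter
{
    private readonly int[] buckets;
    private int total;
    private long currentTick = 0;
    private int index = 0;

    public SlidingWindowCounter(int numberOfBuckets)
    {
        buckets = new int[numberOfBuckets];
    }

    public int Total => total;

    public void Insert(long tick)
    {
        Advance(tick);
        buckets[index]++;
        total++;
    }

    // Moves the "now" pointer forward, dropping counts that fall out of the window.
    public void Advance(long tick)
    {
        while (tick > currentTick)
        {
            currentTick++;
            index = (index + 1) % buckets.Length;
            total -= buckets[index]; // this bucket's old contents just expired
            buckets[index] = 0;
        }
    }
}
```

With a bucket per second and a week-long window, this would be constructed as new SlidingWindowCounter(3600 * 24 * 7), matching the History example above.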
I see two issues:
(minor) DateTime.Now is considerably more expensive than DateTime.UtcNow, so you might want to hoist that out of the loop instead of computing it 60 million times.
(major) you are looping through millions of items every time you get a signal. If you only want to purge signals older than 7 days, you could run your purge once a day instead.
I suspect the reason this is slow is because of this
If Count already equals the capacity, the capacity of the Queue is
increased by automatically reallocating the internal array, and the
existing elements are copied to the new array before the new element
is added.
If Count is less than the capacity of the internal array,
this method is an O(1) operation. If the internal array needs to be
reallocated to accommodate the new element, this method becomes an
O(n) operation, where n is Count.
From the MSDN documentation on Queue<T>.Enqueue() here. The CPU usage is increasing in proportion to n because you are performing an O(n) operation on myQueue.
The solution then is to allocate as much memory as you want this program to use immediately, by calling var myQueue = new Queue<long>(n); and then your code will make the desired change and switch to high memory usage instead of CPU usage.
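A sketch of that fix (a smaller capacity is used here for illustration; at ~100 signals/second over 7 days the real figure would be on the order of 60 million):

```csharp
using System;
using System.Collections.Generic;

public static class Program
{
    public static void Main()
    {
        // Pre-allocating the internal array up front means Enqueue never has
        // to reallocate-and-copy as the queue grows toward its expected size.
        var myQueue = new Queue<long>(1_000_000); // in practice: ~60 million

        for (int i = 0; i < 1000; i++)
            myQueue.Enqueue(DateTime.UtcNow.Ticks);

        Console.WriteLine(myQueue.Count); // 1000
    }
}
```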

How can I make random float numbers but check that there will be no duplicate numbers?

for(int i = 0; i < gos.Length; i++)
{
float randomspeed = (float)Math.Round (UnityEngine.Random.Range (1.0f, 15.0f));
floats.Add (randomspeed);
_animator [i].SetFloat ("Speed", randomspeed);
}
Now what I get is only round numbers between 1 and 15. I mean, I'm not getting numbers like 1.0 or 5.4 or 9.8 or 14.5. Is it logical to have speed values like this? If so, how can I make the random numbers also include floats?
Second, how can I make sure that there will be no duplicate numbers?
gos.Length is 15.
As noted in the other answer, you aren't getting fractional values, because you call Math.Round(), which has the express purpose of rounding to the nearest whole number (when called the way you do).
As for preventing duplicates, I question the need to ensure against duplicates. First, the number of possible values within the range you're selecting is large enough that the chances of getting duplicates is very small. Second, it appears you are selecting random speeds for some game object, and it seems to me that in that scenario, it's entirely plausible that once in a while you would find a pair of game objects with the same speed.
That said, if you still want to do that, I would advise against the linear searches recommended by the other answers. Game logic should be reasonably efficient, and in this scenario that would mean using a hash set. For example:
HashSet<float> values = new HashSet<float>();
while (values.Count < gos.Length)
{
    float randomSpeed = UnityEngine.Random.Range(1.0f, 15.0f);

    // The Add() method returns "true" if the value _wasn't_ already in the set
    if (values.Add(randomSpeed))
    {
        _animator[values.Count - 1].SetFloat("Speed", randomSpeed);
    }
}

// it's not clear from your question whether you really need the list of
// floats at the end, but if you do, this is a way to convert the hash set
// to a list
floats = values.ToList();
The reason you're not getting any decimals is that you're using Math.Round; this will either raise the float to the next whole number or lower it.
As for whether it's logical, it depends. In your case, animation speed is usually driven by floats so that it can smoothly speed up and down.
Also to answer your question on how to avoid duplicates of the same float.. which in itself is already very unlikely, try doing this instead :
for (int i = 0; i < gos.Length; i++)
{
    float randomspeed = 0f;

    // Keep repeating this until we find a unique randomspeed.
    while (randomspeed == 0f || floats.Contains(randomspeed))
    {
        // Use this if you want round numbers
        //randomspeed = Mathf.Round(Random.Range(1.0f, 15.0f));
        randomspeed = Random.Range(1.0f, 15.0f);
    }
    floats.Add(randomspeed);
    _animator[i].SetFloat("Speed", randomspeed);
}
Your first problem: if you use Math.Round(), you'll never get numbers like 5.4...
Second question: you can check for the existence of the number before you add it:

private float GenerateRandomSpeed()
{
    return UnityEngine.Random.Range(1.0f, 15.0f);
}

for (int i = 0; i < gos.Length; i++)
{
    float randomspeed = GenerateRandomSpeed();
    while (floats.Any(x => x == randomspeed))
        randomspeed = GenerateRandomSpeed();
    floats.Add(randomspeed);
    _animator[i].SetFloat("Speed", randomspeed);
}

I didn't test it, but I hope it can direct you to the answer. (Note that Any() requires using System.Linq.)

Call method when specific division of integer is reached

I am programming a video game, and in it, I would like to call a method that adds a player bonus life for every 2,000 points scored. I have no idea what operator(s) to use for such a thing.
If (score is divisible by 2000, each increment){
DoSomething();
}
I'm not even sure if I'm asking this question correctly. Basically when the player scores 2,000pts, 4,000pts, 6,000pts, etc, I want to give him/her a bonus life by calling a method. I already have the method created; I was just wondering how I can apply the conditions that call it.
I tried using this:
public int bonusTarget = 2000;
paddle = GameObject.Find("Paddle").GetComponent<Paddle>();
if (score >= bonusTarget)
{
    paddle.Bonus();
    bonusTarget += 2000;
}
but, it awarded more than one bonus life each increment. I need to award the bonus life only one time for each 2,000pts
"score is divisible by 2000"
if (score % 2000 == 0) DoSomething();
If you need to track the score, use a property instead of a variable, e.g.:
private int _score;

public int Score
{
    get
    {
        return _score;
    }
    set
    {
        var a = _score / 2000;  // integer division already floors for non-negative scores
        var b = value / 2000;
        if (a < b) DoSomething();
        _score = value;
    }
}
Put the if(score is divisible by 2000) check inside the DoSomething() method.
There's no reason to keep this outside the method that increases the bonus lives, in the dozens of possible code paths that increase the score. You need ONE CENTRAL location that the score increases, and checks various conditions.
You need a centralized location to add to the score, like this:
public void AddToScore(int points)
{
score += points;
if (score >= bonusTarget)
{
bonusTarget += 2000;
paddle.Bonus();
}
}
I switched the two lines inside the conditional block in case paddle.Bonus() tries adding points itself, which could cause issues, possibly even infinite recursion.
If you make it so you always add to the score using this method, you can add any special handling you want. You might also want to write a ResetScore() and a GetScore() so you never access score directly elsewhere in your code.
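A standalone version of that idea, with a counter standing in for paddle.Bonus() so the behaviour can be checked outside Unity (the while instead of if is my addition, covering the case where a single large award crosses several 2,000-point thresholds):

```csharp
using System;

public class ScoreKeeper
{
    private int score = 0;
    private int bonusTarget = 2000;

    public int BonusLives { get; private set; } // stands in for paddle.Bonus()

    // The one central place the score changes, so the bonus check
    // fires exactly once per 2,000-point threshold crossed.
    public void AddToScore(int points)
    {
        score += points;
        while (score >= bonusTarget)
        {
            bonusTarget += 2000;
            BonusLives++;
        }
    }
}
```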

Deleting from array, mirrored (strange) behavior

The title may seem a little odd, because I have no idea how to describe this in one sentence.
For the course Algorithms we have to micro-optimize some stuff; one task is finding out how deleting from an array works. The assignment is to delete something from an array and re-align the contents so that there are no gaps. I think it is quite similar to how std::vector::erase works in C++.
Because I like the idea of understanding everything low-level, I went a little further and tried to benchmark my solutions. This produced some weird results.
First, here is a little code that I used:
class Test
{
    Stopwatch sw;
    Obj[] objs;

    public Test()
    {
        this.sw = new Stopwatch();
        this.objs = new Obj[1000000];

        // Fill objs
        for (int i = 0; i < objs.Length; i++)
        {
            objs[i] = new Obj(i);
        }
    }

    public void test()
    {
        // Time deletion
        sw.Restart();
        deleteValue(400000, objs);
        sw.Stop();

        // Show timings
        Console.WriteLine(sw.Elapsed);
    }

    // Delete function
    // value is the to-search-for item in the list of objects
    private static void deleteValue(int value, Obj[] list)
    {
        for (int i = 0; i < list.Length; i++)
        {
            if (list[i].Value == value)
            {
                for (int j = i; j < list.Length - 1; j++)
                {
                    list[j] = list[j + 1];
                    //if (list[j + 1] == null) {
                    //    break;
                    //}
                }
                list[list.Length - 1] = null;
                break;
            }
        }
    }
}
I would just create this class and call the test() method. I did this in a loop for 25 times.
My findings:
The first round takes a lot longer than the other 24; I think this is because of caching, but I am not sure.
When I use a value that is in the start of the list, it has to move more items in memory than when I use a value at the end, though it still seems to take less time.
Benchtimes differ quite a bit.
When I enable the commented if, performance goes up (10-20%) even if the value I search for is almost at the end of the list (which means the if goes off a lot of times without actually being useful).
I have no idea why these things happen, is there someone who can explain (some of) them? And maybe if someone sees this who is a pro at this, where can I find more info to do this the most efficient way?
Edit after testing:
I did some testing and found some interesting results. I run the test on an array with a size of a million items, filled with a million objects. I run that 25 times and report the cumulative time in milliseconds. I do that 10 times and take the average of that as a final value.
When I run the test with my function described just above here I get a score of:
362,1
When I run it with the answer of dbc I get a score of:
846,4
So mine was faster, but then I started to experiment with a half-empty array and things started to get weird. To get rid of the inevitable NullReferenceExceptions I added an extra null check to the if (thinking it would hurt performance a bit more), like so:

if (fromItem != null && fromItem.Value != value)
    list[to++] = fromItem;
This seemed to not only work, but improve performance dramatically! Now I get a score of:
247,9
The weird thing is, the scores seem too low to be true, but sometimes spike; this is the set I took the average from:
94, 26, 966, 36, 632, 95, 47, 35, 109, 439
So the extra evaluation seems to improve my performance despite doing an extra check. How is this possible?
You are using Stopwatch to time your method. This calculates the total clock time taken during your method call, which could include the time required for .Net to initially JIT your method, interruptions for garbage collection, or slowdowns caused by system loads from other processes. Noise from these sources will likely dominate noise due to cache misses.
This answer gives some suggestions as to how you can minimize some of the noise from garbage collection or other processes. To eliminate JIT noise, you should call your method once without timing it -- or show the time taken by the first call in a separate column in your results table since it will be so different. You might also consider using a proper profiler which will report exactly how much time your code used exclusive of "noise" from other threads or processes.
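A small sketch of that warm-up idea (a helper of my own, not from the answer): call the method once untimed so JIT compilation happens before the stopwatch starts.

```csharp
using System;
using System.Diagnostics;

public static class Bench
{
    // Runs `action` once untimed (letting the JIT compile it), then times
    // `iterations` further calls with a Stopwatch.
    public static TimeSpan Time(Action action, int iterations)
    {
        action(); // warm-up call: excluded from the measurement

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            action();
        sw.Stop();
        return sw.Elapsed;
    }

    public static void Main()
    {
        long sum = 0;
        var elapsed = Time(() => { for (int i = 0; i < 1000; i++) sum += i; }, 100);
        Console.WriteLine($"100 timed iterations took {elapsed.TotalMilliseconds} ms");
    }
}
```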
Finally, I'll note that your algorithm to remove matching items from an array and shift everything else down uses a nested loop, which is not necessary and will access items in the array after the matching index twice. The standard algorithm looks like this:
public static void RemoveFromArray(this Obj[] array, int value)
{
    int to = 0;
    for (int from = 0; from < array.Length; from++)
    {
        var fromItem = array[from];
        if (fromItem.Value != value)
            array[to++] = fromItem;
    }
    for (; to < array.Length; to++)
    {
        array[to] = default(Obj);
    }
}
However, instead of shifting items one by one, you might experiment with Array.Copy() in your version, since internally it performs the shift as a single optimized block copy.
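For comparison, here is what a bulk-copy removal might look like (a sketch using int[] instead of Obj[]; Array.Copy shifts the tail down in one call rather than element by element):

```csharp
using System;

public static class ArrayRemoval
{
    // Removes the first element equal to `value` by shifting everything
    // after it down one slot with Array.Copy, then clearing the last slot.
    public static void RemoveFirst(int[] array, int value)
    {
        int index = Array.IndexOf(array, value);
        if (index < 0) return; // value not present: nothing to do

        Array.Copy(array, index + 1, array, index, array.Length - index - 1);
        array[array.Length - 1] = default;
    }
}
```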
