Rust has millions of grass, rock, and tree instances across its maps, which get as large as 8 km across.
Is that grass placed dynamically around the player at runtime? If so, is that done on the GPU somehow or using a shader?
In my game, we use vertex colors and raycasting to place vegetation, and we store transform data which gets initialized with indirect GPU instancing at runtime.
However, I can't imagine this would scale well with something like trees. Are there really thousands of mesh colliders active in the scene at all times?
I thought perhaps they might store all those mesh colliders in the scene, tag the GameObjects, and then, if you hit one with a tool, add a "Tree" component to it.
Am I headed in the right direction with a "spawn everything beforehand, instance it at runtime" approach?
I've tested this and it actually worked, spawning 24 million instances (took 20 minutes to raycast), and then initializing the GPU with the instances.
This is cool and all, even though it led my Unity editor to crash after a little while (memory leak?).
Maybe you store the instances before runtime, and then, when you start the dedicated server, you do all the raycasting and place all the trees, rocks, and other interactive objects.
But I am worried that if I tried to store even 10,000 GameObjects (for interaction: choppable trees, mineable rocks, and the like), performance would tank.
You can actually see this for yourself in the code: https://github.com/Facepunch/Rust.World/blob/master/Assets/Scripts/WorldExample.cs
The game loads the instances into memory (and can stream them as well), but you would never have all the prefabs instantiated as GameObjects at the same time; culling and chunking are needed to keep performance up (and to keep your program from crashing).
There are many tricks: lowering your draw calls, billboarding, culling, object pooling, and chunking, to name a few.
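To make the "store transforms, draw them all at once" idea concrete, here is a minimal sketch of indirect GPU instancing in Unity, along the lines of what the question describes. It assumes a material whose shader reads per-instance matrices from a `StructuredBuffer` named `_Transforms` (that buffer name, and the baked transform array, are assumptions for illustration, not anything from the game's actual code):

```csharp
using UnityEngine;

// Minimal indirect-instancing sketch. Assumes `material` uses a shader that
// reads per-instance matrices from a StructuredBuffer called "_Transforms".
public class GrassInstancer : MonoBehaviour
{
    public Mesh mesh;
    public Material material;
    public Matrix4x4[] transforms; // baked offline (e.g. from a raycast pass)

    ComputeBuffer transformBuffer;
    ComputeBuffer argsBuffer;

    void Start()
    {
        transformBuffer = new ComputeBuffer(transforms.Length, 64); // 16 floats
        transformBuffer.SetData(transforms);
        material.SetBuffer("_Transforms", transformBuffer);

        // Indirect args: index count, instance count, start index,
        // base vertex, start instance.
        uint[] args = { mesh.GetIndexCount(0), (uint)transforms.Length, 0, 0, 0 };
        argsBuffer = new ComputeBuffer(1, args.Length * sizeof(uint),
                                       ComputeBufferType.IndirectArguments);
        argsBuffer.SetData(args);
    }

    void Update()
    {
        // One draw call for every instance. Nothing is culled here, so for a
        // large world you would pair this with chunking and frustum culling.
        Graphics.DrawMeshInstancedIndirect(mesh, 0, material,
            new Bounds(Vector3.zero, Vector3.one * 1000f), argsBuffer);
    }

    void OnDestroy()
    {
        transformBuffer?.Release();
        argsBuffer?.Release();
    }
}
```

The key point is that the GameObject count stays at one: only the transform data lives in memory, which is why this scales to millions of instances where individual GameObjects would not.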
A suggestion, if you're interested in systems like this, is to look into ECS for Unity, as it offers a lighter way of instancing data: https://docs.unity3d.com/Packages/com.unity.entities@0.17/manual/index.html
That way you can have thousands of instanced objects at a fraction of the memory and performance cost of GameObjects.
I also don't want to get too deep into generating objects, since that's not the main question, but generating a map using raycasts is extremely slow, as you now know. Look into noise generation to create biomes and place your instances quickly. Also, maps really need a set place for objects; you're just adding data that a user needs to load in. Another option is to place purely visual objects arbitrarily within the viewing distance, so they only exist in memory: if someone runs past a generic grass pile, they will never care whether it has moved the next time they run past it.
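The noise-based placement suggested above can be sketched roughly like this; the noise scale and threshold are made-up tuning values, and a real map would sample terrain height instead of placing everything at y = 0:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Noise-driven scatter instead of raycasting: sample Perlin noise on a grid
// and spawn an instance wherever the value clears a threshold. Because the
// noise is deterministic, every client computes the same placement for free.
public static class NoiseScatter
{
    public static List<Matrix4x4> Place(int width, int depth,
                                        float noiseScale = 0.05f,
                                        float threshold = 0.6f)
    {
        var transforms = new List<Matrix4x4>();
        for (int x = 0; x < width; x++)
        for (int z = 0; z < depth; z++)
        {
            float n = Mathf.PerlinNoise(x * noiseScale, z * noiseScale);
            if (n > threshold) // the "biome" band where this vegetation grows
            {
                var pos = new Vector3(x, 0f, z);      // sample terrain height here
                var rot = Quaternion.Euler(0f, n * 360f, 0f); // cheap variation
                transforms.Add(Matrix4x4.TRS(pos, rot, Vector3.one));
            }
        }
        return transforms;
    }
}
```

A grid pass like this over 24 million cells runs in seconds rather than the 20 minutes the raycast approach took, since there is no physics query per sample.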
Related
I'm creating an infinite-runner game in Unity, and to avoid floating-point precision issues the player stays in one place while all of the other environmental objects move when the player inputs something. It works amazingly well, but it's meant to be a mobile game and I want to save performance everywhere I can.
Currently, I have a script that takes the player's input and translates it into force vectors that are then applied to the game objects. But since I can have a lot of visual elements doing this, would it be better to remove the script from each game object, keep one copy of it in the scene, and have that script apply the force vectors to all the visual game objects? This is how I see it:
(Calculate physics once -> apply to objects) > (Calculate physics 40 times)
I'm new to Unity, so I don't really know if this would make a difference in performance, but it makes sense in my head that calculating physics once is better than calculating it 100 times.
Thanks in advance!
would it be better to remove the script from each game object, have one copy of it in the scene, and then have the script apply the force vectors to all the visual game objects?
That's usually better, but when it comes to optimization, it's all about testing.
I'd suggest you learn Unity's built-in profiler and actually see how much impact the physics has on performance, then compare those results with other methods of achieving the same thing, and choose whichever method you think is better.
Just remember that performance will vary from device to device, and even between runs on the same device.
would it be better to remove the script from each game object, have one copy of it in the scene, and then have the script apply the force vectors to all the visual game objects?
Yes, it would be better in most cases.
From Unity's "10000 Update() calls" article:
Unity goes over all Behaviours (Scripts) to update them. Special iterator class, SafeIterator, ensures that nothing breaks if someone decides to delete the next item on the list. Just iterating over all registered Behaviours (Scripts) takes 15%.
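A minimal sketch of the single-manager pattern this argues for might look like the following. Note that for objects driven by rigidbodies you would apply forces in `FixedUpdate()` instead; this version just moves transforms, and the class and field names are illustrative:

```csharp
using System.Collections.Generic;
using UnityEngine;

// One Update() drives all movable objects: the shared movement is computed
// once, then applied to every registered transform, instead of each object
// paying the per-Behaviour Update() overhead the quoted article describes.
public class WorldMover : MonoBehaviour
{
    public static readonly List<Transform> Movables = new List<Transform>();

    public Vector3 worldVelocity; // set from player input elsewhere

    void Update()
    {
        Vector3 step = worldVelocity * Time.deltaTime; // calculated once
        for (int i = 0; i < Movables.Count; i++)
            Movables[i].position += step;              // applied N times
    }
}
```

Each environmental object then just adds itself to `WorldMover.Movables` in `OnEnable()` and removes itself in `OnDisable()`, and needs no `Update()` of its own.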
Hello, I am making a 2D game in C#, using Unity 2D.
The game is basically a tower-stacking game where the player is given random objects from an array which they need to stack on top of each other. Objects spawn at the top of the screen with Body Type Kinematic, and the player can move them only on the x axis; when the player lets go of an object, its Body Type changes to Dynamic and it falls, landing on the start platform or the tower.
My problem is that when a new object lands on the existing tower or the start platform, it does not land smoothly: it sinks into the other object and sometimes bounces, which often tips over the tower.
And when these objects sit on top of each other, they vibrate, causing the tower to tip over.
Is there any way to make the objects stable, and to keep them from sinking into each other when landing?
Thanks in advance.
What you're describing is a well-known problem and limitation with all physics engines.
It's called stacking stability, and it always becomes a problem once you have a large enough number of physics objects.
There isn't really a single simple solution; it's a combination of choosing the right physics engine, setting your objects' properties correctly, and even adding some custom code of your own to work around the issues where possible (for example, by disabling physics on elements that are deep enough in the stack to be considered "stable", until the situation changes).
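The "freeze the stable part of the stack" idea can be sketched like this; the class name, the bottom-to-top block array, and the depth threshold are all illustrative assumptions:

```csharp
using UnityEngine;

// Blocks buried deep enough under newer blocks are switched to a
// non-simulated body type so they stop contributing jitter to the tower.
public class TowerStabilizer : MonoBehaviour
{
    public Rigidbody2D[] blocks;       // ordered bottom-to-top as stacked
    public int keepDynamicFromTop = 3; // only the newest few stay simulated

    // Call this each time a new block settles onto the tower.
    public void OnBlockLanded()
    {
        int freezeBelow = blocks.Length - keepDynamicFromTop;
        for (int i = 0; i < blocks.Length; i++)
        {
            bool frozen = i < freezeBelow;
            blocks[i].bodyType = frozen ? RigidbodyType2D.Static
                                        : RigidbodyType2D.Dynamic;
        }
    }
}
```

If the gameplay later needs a deep block to react again (say, a bomb hits the tower), you can flip the affected bodies back to Dynamic at that moment.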
I recommend reading (and watching) this, and from there, exploring wherever the links will take you.
And as I said, keep in mind that this is a common problem and sometimes hard to solve; in some complex cases (though yours shouldn't be one of them), even impossible to solve.
(EDIT two days after the answer has been accepted, I should have written this right away, sorry) :
Oh, and I forgot (sorry about that, but hopefully you've already found it in some of the resources you read): the problem of objects sinking into each other can also be influenced by some settings (max depenetration velocity, I think it's called in Unity). Another thing that might help is outright custom code that raycasts downwards from the falling object; when it detects a block at a certain distance (you'll probably want to check for a distance about the size the object can travel in 1 to 10 frames or so), it momentarily disables physics on the falling object, positions it exactly on top of the block below, zeroes out the velocity, and then re-enables physics. This avoids the penetration problems, as well as the instability and vibration that happen when a new block impacts the tower. If you still want some of that impact instability to be present, you can then manually add a physics force to the impacted block(s). That has the upside that you set the size of the force yourself (instead of it being calculated from the block), meaning you'll have much more control over how significant the impact effect is. That control can be very useful for game balance, since this effect matters a lot for the difficulty of the game, and in general you want that level of control over anything that affects gameplay difficulty ;)
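A rough sketch of that raycast-and-snap idea, for a 2D setup like the one in the question (the look-ahead distance and component layout are assumptions):

```csharp
using UnityEngine;

// While a block falls, look a short distance below it; when something is
// close, snap the block on top of it, kill the velocity, and let the
// physics engine take over again. Avoids penetration and bounce on landing.
public class SnapOnLand : MonoBehaviour
{
    public float lookAhead = 0.5f; // roughly a few frames of fall distance

    Rigidbody2D body;
    float halfHeight;

    void Awake()
    {
        body = GetComponent<Rigidbody2D>();
        halfHeight = GetComponent<Collider2D>().bounds.extents.y;
    }

    void FixedUpdate()
    {
        // Only act while the block is simulated and actually falling.
        if (body.bodyType != RigidbodyType2D.Dynamic || body.velocity.y >= 0f)
            return;

        RaycastHit2D hit = Physics2D.Raycast(
            body.position - new Vector2(0f, halfHeight), Vector2.down, lookAhead);

        if (hit.collider != null)
        {
            // Place the block exactly on the surface and zero the impact.
            body.position = new Vector2(body.position.x, hit.point.y + halfHeight);
            body.velocity = Vector2.zero;
            // Optionally: AddForce() on hit.rigidbody here with a hand-tuned
            // magnitude, so the tower still reacts, but on your terms.
        }
    }
}
```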
OK, I know the easiest way to make (pretty high-poly) grass in Unity 3D is just to randomly spawn some grass prefabs across an area. But I need my grass/vegetation to grow like this: https://dribbble.com/shots/4726918-Time-lapse-Air-Plant
But that effect has lots of polys and was created using an armature (and bones drag on performance as well), so I need to find the MOST EFFICIENT way to make a forest of growing vegetation without lowering the FPS. I'm targeting iOS/Android, and I'm already seeing low FPS on high-poly models.
Keeping in mind that each blade of vegetation needs an animated growth (ideally with some random variation), what is best for mobile performance? Wouldn't randomly spawning models that each have an armature cause a problem?
What's the common route here?
Basically, I am trying to optimize a Unity game for mobile devices, so it is imperative to contain CPU usage. As this is a complex game, many of the scripts reference each other (and their GameObjects). Currently, I am using GameObject.FindGameObjectWithTag(string tag) to reference other GameObjects, Components, and Scripts. I am aware that this can also be done with public variables using drag-and-drop in the editor, but since I know which GameObject belongs in each level, I found the first option simpler: the drag-and-drop led to errors many times and was tedious to use. However, that won't matter if the performance of one approach outweighs those drawbacks. So I was wondering whether there is a performance difference between the two approaches, and which is better suited to a high-performance mobile game.
FindGameObjectWithTag() has O(n) complexity in the worst case. You will not have performance issues if you have a small number of objects in your scene and a small number of objects searching for others by tag.
Another approach is serializing these references in the script (assigning them in the Inspector) and using them directly. This decreases CPU usage at the cost of memory, since the references are held whether or not the objects are currently used.
I suggest a third approach, if it is possible for you: search for the Singleton pattern on Google and YouTube. It lets you keep a reference to that object (when there is exactly one of it) at all times, without repeatedly finding it with FindGameObjectsWithTag(), resulting in very little CPU and memory overhead.
In my experience, your best bet is to create a public variable; if it isn't assigned when you need it, use FindGameObjectWithTag (or however you were finding it before) and store the result in that variable, so you only do the lookup once. As long as you don't call FindGameObjectWithTag every frame, the lookup hit shouldn't be too bad.
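A minimal sketch of that lazy-cache pattern; the class name, the `"Player"` tag, and the chase logic are purely illustrative:

```csharp
using UnityEngine;

// Cache the tag lookup: fall back to FindGameObjectWithTag only when nothing
// was assigned in the Inspector, and keep the result for all later frames.
public class EnemyChaser : MonoBehaviour
{
    [SerializeField] GameObject player; // drag-and-drop if convenient

    GameObject Player
    {
        get
        {
            if (player == null) // the expensive lookup runs at most once
                player = GameObject.FindGameObjectWithTag("Player");
            return player;
        }
    }

    void Update()
    {
        // Uses the cached reference; no per-frame FindGameObjectWithTag cost.
        transform.position = Vector3.MoveTowards(
            transform.position, Player.transform.position, Time.deltaTime);
    }
}
```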
Using the Unity3D engine. I'm making a multiplayer game for fun, using Unity's standard networking. If servers hold 25-50 players, what map size is recommended? How big can I make a very detailed map before it is too big for effective gameplay? How do I optimize a large map? Any ideas and advice would be great; I just want to learn, and could not find anything about this on Google :D
*My map is sliced into different parts.
The size of the map itself, in units, doesn't matter for performance at all. Just keep in mind that Unity (as any other game engine) uses floats for geometry, and when the float values get too high or too low, things can get funny.
What matters is the amount of data that your logic, networking, and rendering engine have to churn through. These are different things (even logic and networking data differ), and the limits on each depend greatly on the architecture of your game.
Let's talk about networking. There, two parameters are critical as your limits: bandwidth and latency. Bandwidth is how much data you can transfer; latency is how long it takes to arrive. If that's confusing, imagine a truck full of HDDs travelling from one city to another: it has gigantic bandwidth, and you can transfer entire data centers this way, but the latency, the time for the signal to travel, is a few hours. On the other hand, two people from these cities can hop on air balloons, look at each other in the night sky, and turn their flashlights on and off. This way they can exchange just one bit of information, but with the lowest possible latency: you can't get faster than light.
It also depends on how your networking works. RTS games, for example, often use a lock-step multiplayer architecture that can operate on thousands of units but exchanges only a limited amount of data between users: their input commands. A first-person shooter, on the other hand, relies heavily on low latency (which lock-stepping can damage): 10 ms matters much more when you jump and fire a rocket launcher than when you order your troops to attack. So the networking logic is organised differently: every player's computer predicts what will happen, but the central server has authority over what actually happened. Of course, these are just general examples of architectures that can be used; choosing the right way to do the networking is a very difficult, but very interesting and creative, task.
Now, logic itself. Actually, most of the gameplay logic used in modern games is relatively simple in terms of CPU requirements, unless it's physics or AI. Using physics in a multiplayer game is tricky enough on its own because of synchronisation problems (remember floats?); usually, the actual logic that can influence who wins and who loses is quite simplified: level geometry is completely static, characters move using simple logic without real physical forces, and the physics is usually limited to collision detection. Of course, you see a lot of physics-based visual stuff: ragdolls of killed enemies falling down, rubble from explosions flying up; but these are typically desynchronised between different computers and can't actually affect the gameplay itself.
And finally, rendering. Here, a lot of different constraints come into play. To cover them all, I would have to describe the whole rendering pipeline of Unity on different devices, and that is clearly out of scope for this question. Thankfully, there's another way! Instead of reasoning about this limit theoretically, just build a practical prototype. Put different game assets in the scene, run it on the target device, and see how it performs. Adjust, repeat! These game assets can be completely ugly or irrelevant; however, they have to have the same technical properties as what you're going to use in the real game: number of polygons, texture sizes, shaders, etc. Let's say, for example, that you want to create a COD-like multiplayer shooter. To come up with your rendering requirements, put in N environment models with N polygons each, using NxN textures, add N characters with skeleton animations with N bones, and don't forget some fake logic that emulates CPU-intensive work so your performance measurements are more realistic. Of course, it won't give you the final picture, but it's a good way to start, and it's great to do this before you start producing a lot of art assets.
Overall, game performance optimisation is a very broad and interesting topic, and it's impossible to give a precise answer to such a question.
You can improve this by reducing your camera's far clipping plane to shorten the visible render distance, and you can also use LOD, giving your sliced parts lower-detail versions.
Check this link for more detail about LOD:
http://docs.unity3d.com/Manual/class-LODGroup.html
If you need more improvement, you can write a script to load terrain at runtime based on distance around your player.
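Since the map is already sliced into parts, a minimal version of that distance-based loading might look like this (the slice array, root objects, and load distance are assumptions; a real project would swap `SetActive` for additive scene loading or streaming):

```csharp
using UnityEngine;

// Toggle each map slice on or off depending on the player's distance to it.
public class SliceStreamer : MonoBehaviour
{
    public Transform player;
    public GameObject[] slices;   // one root object per map slice
    public float loadDistance = 300f;

    void Update()
    {
        float maxSqr = loadDistance * loadDistance; // avoid sqrt per slice
        foreach (var slice in slices)
        {
            bool near = (slice.transform.position - player.position)
                            .sqrMagnitude < maxSqr;
            if (slice.activeSelf != near)
                slice.SetActive(near);
        }
    }
}
```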
First and foremost: Make the gameplay work, optimize it later. Premature optimization is a waste of programmer's time.
Secondly: think of Skyrim and Minecraft. The world is separated into pieces that are loaded in the background as you move around. Using that approach (chunking your world into pieces), you can have a virtually infinite world size.
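A Minecraft-style chunk window can be sketched as follows; the chunk size, window radius, and placeholder chunk builder are illustrative assumptions, with your real generation or streaming code going where `BuildChunk` is:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Keep only the chunks in a square window around the player loaded,
// creating and destroying them as the player moves between chunk cells.
public class ChunkWindow : MonoBehaviour
{
    public Transform player;
    public float chunkSize = 64f;
    public int radius = 2; // a 5x5 window of chunks stays loaded

    readonly Dictionary<Vector2Int, GameObject> loaded =
        new Dictionary<Vector2Int, GameObject>();

    void Update()
    {
        var center = new Vector2Int(
            Mathf.FloorToInt(player.position.x / chunkSize),
            Mathf.FloorToInt(player.position.z / chunkSize));

        // Load missing chunks inside the window.
        for (int dx = -radius; dx <= radius; dx++)
        for (int dz = -radius; dz <= radius; dz++)
        {
            var c = new Vector2Int(center.x + dx, center.y + dz);
            if (!loaded.ContainsKey(c))
                loaded[c] = BuildChunk(c);
        }

        // Unload chunks that fell outside the window.
        var stale = new List<Vector2Int>();
        foreach (var kv in loaded)
            if (Mathf.Abs(kv.Key.x - center.x) > radius ||
                Mathf.Abs(kv.Key.y - center.y) > radius)
                stale.Add(kv.Key);
        foreach (var c in stale) { Destroy(loaded[c]); loaded.Remove(c); }
    }

    GameObject BuildChunk(Vector2Int c) // placeholder chunk builder
    {
        var go = GameObject.CreatePrimitive(PrimitiveType.Plane);
        go.transform.position = new Vector3(c.x * chunkSize, 0f, c.y * chunkSize);
        go.transform.localScale = Vector3.one * (chunkSize / 10f); // Plane = 10 units
        return go;
    }
}
```

Because chunk positions derive from integer coordinates, this also pairs well with a floating-origin scheme on very large maps, keeping world-space floats small near the player.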