I'm making a game in Unity3D and am currently creating an AI for the enemy. The enemy needs to walk around and search for the player without running into walls.
The enemy always moves forward along its local z-axis until it encounters an obstacle or its forward raycast hits an object. When the raycast hits an object that is part of the environment, the enemy then raycasts in the 7 remaining directions (diagonally, side-to-side and backwards) to check for more obstacles near it, as shown in this image.
The path it takes is then determined by which of these rays hit something: the enemy turns toward a direction whose ray didn't hit anything and that is the most optimal direction.
By most optimal I mean in the order:
fl OR fr
l OR r
bl OR br
b
I need to decide, based on this data, which direction to turn. If, say, fl and fr are both clear, I would randomly decide between the two directions.
I want to optimize this process so I don't have to use multiple if-statements. I had thought of using bitmasking techniques since there are 8 directions, if you include forward, and each bit could represent a direction.
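Roughly what I had in mind, just as an illustrative sketch (the enum layout and the random tie-break are placeholders, not tested code):

```csharp
using UnityEngine;

// Sketch of the bitmask idea: each bit marks a blocked direction.
[System.Flags]
public enum Dir : byte
{
    None = 0,
    F = 1, FR = 2, R = 4, BR = 8, B = 16, BL = 32, L = 64, FL = 128
}

public class TurnPicker
{
    // Pairs share a priority rank: (FL, FR) > (L, R) > (BL, BR) > B
    static readonly Dir[] priority = { Dir.FL, Dir.FR, Dir.L, Dir.R, Dir.BL, Dir.BR, Dir.B };

    public static Dir Pick(Dir blocked)
    {
        for (int i = 0; i < priority.Length; i += 2)
        {
            Dir a = priority[i];
            Dir b = (i + 1 < priority.Length) ? priority[i + 1] : Dir.None;
            bool aFree = (blocked & a) == 0;
            bool bFree = b != Dir.None && (blocked & b) == 0;

            if (aFree && bFree) return Random.value < 0.5f ? a : b;  // tie: choose randomly
            if (aFree) return a;
            if (bFree) return b;
        }
        return Dir.None; // completely boxed in
    }
}
```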
Any ideas, constructive criticism, etc is welcome. Thanks for your time.
Do not optimize prematurely; it is the root of all evil, as some say. This would be a perfect example.
The reason we code in C# and not in assembly (or even C) is not performance, it's readability. While it may be possible to make your code run a tiny bit faster, the results may not even be measurable. An if branch here is still pretty tiny, and I would strongly advise against replacing it with anything else: you will lose a lot of readability while gaining little, if any, performance. Bitmasking techniques start to become effective when you deal with thousands of objects (i.e. entities), but if your if branch fits on a couple of pages, I would not touch it unless I found it to be a major drag by measuring it in the profiler. If you are not sure what you are doing, it's not that hard to make your code run slower.

The priority is to make the code readable (definitely in your case, as you won't be running it in a tight loop) and easy to modify. In some cases it's better to unroll the code into a more verbose if branch than to pack it into some bizarre loop; remember, a loop is also an instruction.
G'day!
I have been reading a lot lately about the new path-finding system in Unity 2018. I was wondering if you had checked it out yet?
If you want to create it from scratch, mad props to you - however, I would follow the documentation (on the Unity website, linked below) and create an Update script on the enemy GameObject that finds the shortest path to the player GameObject's transform position, adjusting for obstacles along the way.
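For example, a minimal sketch assuming the enemy has a NavMeshAgent component, the level has a baked NavMesh, and the player reference is assigned in the Inspector:

```csharp
using UnityEngine;
using UnityEngine.AI;

public class ChasePlayer : MonoBehaviour
{
    public Transform player;   // assigned in the Inspector (placeholder name)
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        // Re-path every frame toward the player's current position;
        // the agent steers around baked obstacles on its own.
        agent.SetDestination(player.position);
    }
}
```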
Have a browse around on YouTube as well - there are heaps of great tutorials.
Otherwise, if you were looking to do it on your own, the way I would personally do it (which probably won't be the most optimal) would be to scan for potential paths each frame, find the shortest one, and act on it by moving the enemy at a predetermined speed * Time.deltaTime. A great visualization of the system can be found on Devon Crawford's website (link below).
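The movement step itself could be as simple as the following sketch, assuming your own scan has already picked the next waypoint (the target field is just a placeholder):

```csharp
using UnityEngine;

public class MoveAlongPath : MonoBehaviour
{
    public Transform target;   // next point on the path you computed yourself
    public float speed = 3f;   // units per second

    void Update()
    {
        // Step toward the current waypoint at a fixed speed.
        transform.position = Vector3.MoveTowards(
            transform.position, target.position, speed * Time.deltaTime);
    }
}
```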
Unity Documentation Link: https://docs.unity3d.com/Manual/Navigation.html
Devon Crawford's Website: http://www.devoncrawford.io/software/pathfinding
I've been in game development for a while, and this is a question I've never been able to find an answer to.
So, seriously, how does this magic work at its very base? For example, when I'm using Unreal Engine 4, I can simply call LineTraceByChannel, pick a start and end point, and the engine will trace a line and return the hit result. I can also visualize it. But my brain thinks like:
"Okay, there is no such thing as actually shooting a line segment like a bullet; that's just a simpler way to visualize it. There must be huge math behind it. How does it detect the intersecting actors? Does it check every available geometry, or only the ones within the radius of endpoint - start point? Then how does it detect that those actors are in that radius? How come it isn't 'that' expensive?"
I would be really enlightened if someone can explain what the heck is behind the line tracing hit tests... Thank you for your time...
Nice question. Too dense to get a short answer of the kind "raycasting works just like this" and that's it. The big deal is how the raycast and the collision point it returns are optimized, and all the math and software tricks behind it that make it that cheap. However, that's too deep and complex to be fully explained in an answer, I believe. Not that I know the full answer myself, anyway.
An approximate, unoptimized approach would be to check the entities of the scene with a sphere-line intersection test. The ray is just a line, so two points. Once you have the intersected entities, you take the one with the lowest z coordinate (closest to the camera the ray is thrown from), and for that entity you run a plane-line test against every plane of the 3D model's polygons. Again, the big deal is how this is optimized to be so computationally cheap that it can be used, for example, in an Update().
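For illustration, a minimal sketch of the sphere-line (ray-vs-bounding-sphere) test mentioned above; a real engine would run something like this against a spatial acceleration structure rather than against every entity in the scene:

```csharp
using UnityEngine;

public static class RaySphere
{
    // Returns true and the distance t along the ray if the ray hits the sphere.
    // "dir" is assumed to be normalized.
    public static bool Hit(Vector3 origin, Vector3 dir, Vector3 center, float radius, out float t)
    {
        Vector3 oc = origin - center;                 // from sphere center to ray origin
        float b = Vector3.Dot(oc, dir);
        float c = Vector3.Dot(oc, oc) - radius * radius;
        float disc = b * b - c;

        t = 0f;
        if (disc < 0f) return false;                  // ray misses the sphere entirely

        float sqrtDisc = Mathf.Sqrt(disc);
        t = -b - sqrtDisc;                            // nearest intersection
        if (t < 0f) t = -b + sqrtDisc;                // ray origin is inside the sphere
        return t >= 0f;
    }
}
```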
The same question arises with collisions. In 2D it's not that easy either, but you can build yourself a simple collision system: to check whether two polygons collide, for example, test whether any point of one polygon is inside the other. To improve efficiency and/or precision, techniques such as convex hulls and bounding volumes (box, sphere, capsule, cylinder, etc.) are used. That gets out of hand once you go to 3D.
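For illustration, a minimal sketch of the naive 2D point-in-polygon test mentioned above (the even-odd / ray-crossing rule); to test two polygons you would run it for every vertex of one polygon against the other, and vice versa:

```csharp
using UnityEngine;

public static class Poly2D
{
    public static bool Contains(Vector2[] poly, Vector2 p)
    {
        bool inside = false;
        // Walk each edge (j, i) and count crossings of a horizontal ray from p.
        for (int i = 0, j = poly.Length - 1; i < poly.Length; j = i++)
        {
            bool crosses = (poly[i].y > p.y) != (poly[j].y > p.y) &&
                           p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                                 (poly[j].y - poly[i].y) + poly[i].x;
            if (crosses) inside = !inside;
        }
        return inside;
    }
}
```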
Just as explaining the bounding-volume techniques used for collision detection and optimization is too large and deep a topic for a single answer about collisions in game engines, I believe something similar applies to explaining in detail how raycasts work.
Anyhow, I would also be glad to read any comments about how raycasts work in game engines in detail, maybe some explanation at a higher level than the simple model I described, which I don't even know is actually used; it's just a computational-geometry approach to obtaining the intersection point.
Hello, I am making a 2D game in C#, using Unity2D.
The game is basically a tower stacking game where the player is given random objects from an array which they need to stack on top of each other. Objects spawn at the top of the screen with Body Type Kinematic; the player can move them only on the x axis, and when the player lets go of an object, its Body Type changes to Dynamic and it falls and lands on the start platform or the tower.
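Simplified, the spawn/release step I'm describing looks roughly like this (just a sketch, the names are placeholders):

```csharp
using UnityEngine;

public class Block : MonoBehaviour
{
    Rigidbody2D rb;

    void Awake()
    {
        rb = GetComponent<Rigidbody2D>();
        rb.bodyType = RigidbodyType2D.Kinematic;   // spawned at the top, player moves it on x
    }

    public void Release()
    {
        rb.bodyType = RigidbodyType2D.Dynamic;     // let it fall onto the tower
    }
}
```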
My problem is that when this new object lands on the existing tower or the start platform, it does not land smoothly: it sinks into the other object and sometimes bounces, and that often tips over the tower.
And when these objects rest on top of each other, they keep vibrating, which causes the tower to tip over.
Is there any way to make the objects stable and keep them from sinking into each other when landing?
Thanks in advance.
What you're describing is a well-known problem and limitation with all physics engines.
It's called stacking stability, and it always becomes a problem once you have a large enough number of physics objects.
There isn't really a single simple solution; it's a combination of choosing the right physics engine, setting your objects' properties correctly, and even adding some custom code of your own that works around some of the issues when possible (for example by disabling physics on elements that are deep enough in the stack to be considered "stable", until the situation changes).
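A minimal (untested) sketch of that "freeze the stable part" idea; how you track each block's depth in the stack is up to your own game logic, so the depthInStack field here is purely hypothetical:

```csharp
using UnityEngine;

public class StackBlock : MonoBehaviour
{
    public int depthInStack;               // 0 = topmost block; updated by your own logic
    public int freezeBelowDepth = 3;       // tune to taste

    Rigidbody2D rb;

    void Awake()
    {
        rb = GetComponent<Rigidbody2D>();
    }

    void Update()
    {
        // Blocks buried deep enough in the tower are considered "stable"
        // and stop being simulated, which removes their jitter entirely.
        if (depthInStack >= freezeBelowDepth && rb.bodyType == RigidbodyType2D.Dynamic)
        {
            rb.bodyType = RigidbodyType2D.Static;
        }
    }
}
```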
I recommend reading (and watching) this, and from there, exploring wherever the links will take you.
And as I said, keep in mind that this is a common problem, and sometimes a hard one to solve. In some complex cases (yours shouldn't be one of them, though), it is even impossible to solve.
(EDIT, two days after the answer was accepted; I should have written this right away, sorry):
Oh, and I forgot (sorry about that, but hopefully you've already found it in some of the resources you read): the problem of objects sinking into each other can also be influenced by some settings (max depenetration velocity, I think it's called in Unity).

Another thing that might help is outright custom code that raycasts downwards from the falling object, and when it detects a block at a certain distance (you'll probably want a distance of about what the object travels in 1 to 10 frames or so), momentarily disables the physics on the falling object, positions it exactly on top of the block below, zeroes out the velocity, and then re-enables the physics. This avoids the penetration problems, as well as the instability and vibrations that happen when the new block impacts the tower.

If you still want the effect of that impact instability to be present in some way, you can manually add a physics force to the impacted block(s). That has the upside that you set the size of the force yourself (instead of it being calculated from the falling block), meaning you have much more control over how significant the impact effect is. That could be very useful for game balance, since this effect matters a lot for the difficulty of the game, and in general you'll want that level of control over things that affect gameplay difficulty ;)
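A rough, untested sketch of that raycast-and-snap idea, assuming 2D physics, a BoxCollider2D on each block, and a dedicated "Tower" layer for blocks that have already landed; here the block is simply snapped into place and its velocity zeroed rather than toggling physics off and on:

```csharp
using UnityEngine;

public class SoftLanding : MonoBehaviour
{
    public LayerMask towerMask;           // layer of already-landed blocks (assumed setup)
    public float checkDistance = 0.5f;    // roughly what the block travels in a few frames

    Rigidbody2D rb;
    BoxCollider2D box;

    void Awake()
    {
        rb = GetComponent<Rigidbody2D>();
        box = GetComponent<BoxCollider2D>();
    }

    void FixedUpdate()
    {
        if (rb.bodyType != RigidbodyType2D.Dynamic || rb.velocity.y >= 0f) return;

        // Look for a tower block directly below the falling one.
        // Assumes the falling block itself is not on the tower layer yet.
        RaycastHit2D hit = Physics2D.Raycast(rb.position, Vector2.down,
                                             box.bounds.extents.y + checkDistance, towerMask);
        if (hit.collider != null)
        {
            // Snap exactly on top of the block below and kill the impact velocity.
            rb.position = new Vector2(rb.position.x, hit.point.y + box.bounds.extents.y);
            rb.velocity = Vector2.zero;
            rb.angularVelocity = 0f;
        }
    }
}
```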
I have an object built from approx. 1700 small cube meshes (pretty simple ones). If they are hit, I'm trying to return them to their origin a few seconds after any other object hits them (the trigger is any collision).
The result ranges from very poor performance to completely stuck.
What I have tried so far:
Limit the object to one collision trigger, to avoid it being activated over and over again.
Disable the object's physics.
Move the object using both physics and the transform directly.
What seems to be the problem? Can Unity even handle that many objects and collisions?
Screenshots of the profiler would make it easier to think of a proper solution for this situation, I think. I don't think draw calls or collisions cause this problem. More about the Unity3D profiler is here. You can try to handle physics in FixedUpdate and try different combinations of rigidbody attributes (Interpolate etc.). Good luck.
The problem is that physics in Unity is not a native solution. Unity uses PhysX for physics, and it is used like a black box, so Unity doesn't really control how it works under the hood. That's why all physics-based operations are complex and have rather bad performance. 1000 physics-based objects seems too much for Unity3D to handle. And it's not the only limit in Unity; for example, if you create about 10,000 GameObjects (no matter what functionality they have), they can freeze even Unity's editor.
As for possible solutions, there's not much you can do:
Look through optimization guides and best practices here, here and here.
Use Unity's profiler to optimize physics performance (try to remove physics-based "spikes").
Try to develop a system that disables GameObjects that are not currently visible in the scene (see the sketch after this list).
Try to simplify the meshes and colliders, or make one big mesh that splits into smaller parts when you need interaction.
Try to create your own simple, mathematically-based physics. I know it will be really complicated, but your specialized, non-universal physics solution may perform much better than PhysX.
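As a minimal sketch of the "disable what isn't visible" item above (assuming each cube has its own Renderer and Rigidbody):

```csharp
using UnityEngine;

// OnBecameInvisible/OnBecameVisible are called by Unity based on the
// Renderer attached to this same GameObject.
public class SleepWhenOffscreen : MonoBehaviour
{
    Rigidbody rb;

    void Awake()
    {
        rb = GetComponent<Rigidbody>();
    }

    void OnBecameInvisible()
    {
        rb.isKinematic = true;    // stop simulating this cube while off screen
    }

    void OnBecameVisible()
    {
        rb.isKinematic = false;   // resume normal physics
    }
}
```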
So anyway it will be a difficult struggle.
Using the Unity3D engine. I'm making a multiplayer game for fun, using Unity's standard networking. If servers hold 25-50 players, what map size is recommended? How big can I make a very detailed map before it is too big for effective gameplay? How do I optimize a large map? Any ideas and advice would be great, I just want to learn and could not find anything about this on google :D
*My map is sliced into different parts.
The size of the map itself, in units, doesn't matter for performance at all. Just keep in mind that Unity (as any other game engine) uses floats for geometry, and when the float values get too high or too low, things can get funny.
What matters is the amount of data that your logic, networking and rendering have to churn through. These are different things (even logic data and networking data differ), and the limits on each greatly depend on the architecture of your game.
Let's talk about networking. There, two parameters are critical as your limits: bandwidth and latency. Bandwidth is how much data you can transfer, and latency is how fast. OK, this explanation is confusing. Imagine a truck full of HDDs travelling from one city to another: it has gigantic bandwidth, and you can transfer entire data centers this way. But the latency, the time it takes the signal to travel, is a few hours. On the other hand, two people from those cities can hop into air balloons, look at each other in the night sky and turn their flashlights on and off. This way they'll exchange just one bit of information, but with the lowest possible latency: you can't get faster than light.
It also depends on how your networking works. RTS games, for example, often use a lock-step multiplayer architecture that can operate on thousands of units but only exchanges a limited amount of data between users: their input commands. A first-person shooter, on the other hand, relies heavily on latency (which lock-stepping can hurt): 10 ms matters much more when you jump and fire a rocket launcher than when you order your troops to attack. So the networking logic is organised differently: every player's computer predicts what will happen, but the central server has authority over what actually happened. Of course, these are just general examples of architectures that can be used; choosing the right way to do the networking is a very difficult, but very interesting and creative, task.
Now, the logic itself. Most of the gameplay logic used in modern games is relatively simple in terms of CPU requirements, unless it's physics or AI. Using physics in a multiplayer game is tricky enough on its own because of synchronisation problems (remember floats?); usually, the actual logic that can influence who wins and who loses is quite simplified: level geometry is completely static, characters move using simple logic without real physical forces, and the physics is usually limited to collision detection. Of course, you see a lot of physics-based visual stuff (ragdolls of killed enemies falling down, rubble from explosions flying up), but these are typically desynchronised between different computers and can't actually affect the gameplay itself.
And finally, rendering. Here, a lot of different constraints come into play. To cover them all, I would have to describe the whole rendering pipeline of Unity on different devices, and that is clearly out of scope for this question. Thankfully, there's another way! Instead of reasoning about this limit theoretically, just do a practical prototype. Put different game assets into the scene, run it on the target device and see how it performs. Adjust, repeat! These game assets can be completely ugly or irrelevant; however, they have to have the same technical properties as what you're going to use in the real game: number of polygons, texture sizes, shaders, etc. Let's say, for example, that you want to create a COD-like multiplayer shooter. To come up with your rendering requirements, just put in N environment models with N polygons each, using NxN textures, put in N characters with skeleton animations with N bones each, and also add some fake logic that emulates CPU-intensive work so your performance measurements are more realistic. Of course, it won't give you a final picture, but it's a good way to start, and it's great to do this before you start producing a lot of art assets.
Overall, game performance optimisation is a very broad and interesting topic, and it's impossible to give a precise answer to such a question.
You can improve this by reducing the far clipping plane of your camera to shorten the visible render distance, and you can also use LOD, giving your sliced parts lower-detail versions.
Check this link for more detail about LOD:
http://docs.unity3d.com/Manual/class-LODGroup.html
If you need further improvement, you can make a script that loads terrain at runtime based on the distance around your player.
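A minimal sketch of such a script, assuming each map slice has a single root object you can enable and disable (the field names are just placeholders):

```csharp
using UnityEngine;

public class ChunkStreamer : MonoBehaviour
{
    public Transform player;
    public GameObject[] chunks;      // one root object per map slice
    public float loadDistance = 150f;

    void Update()
    {
        foreach (GameObject chunk in chunks)
        {
            // Enable nearby slices, disable distant ones.
            bool near = Vector3.Distance(player.position, chunk.transform.position) < loadDistance;
            if (chunk.activeSelf != near)
                chunk.SetActive(near);
        }
    }
}
```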
First and foremost: make the gameplay work, optimize it later. Premature optimization is a waste of a programmer's time.
Secondly: think of Skyrim and Minecraft. The world is separated into pieces that are loaded in the background as you move around. Using that approach (chunking your world into pieces), you can have a virtually infinite world size.
I'm working on an overhead shooter and what happens is, over time, as I move in circles around the arena, the enemies will begin to stack on top of each other until they're one giant stack of units. It ends up looking pretty silly.
The AI is pretty simple and basic: Find the player, move towards him, and attack him if he's in range.
What's the best way to push them away from each other so that they don't all end up on the same spot? I think flocking is a bit overkill (and probably too intensive since I'll have 100-200 enemies on the screen at a time).
Ideas?
Thanks!
Here are a few different approaches you could take to solving this problem:
You could define a potential field for each unit that associates a "height" or "badness" to each location on the map. Each unit moves in a way that tries to minimize its potential, perhaps by taking a step in the direction that moves it to the lowest potential that it can in one step. You could define the potential function so that it slopes toward the player, causing all units to try to move to the player, but also be very high around existing units, causing units to avoid bumping into one another. This is a very powerful framework that is exploited all the time in AI; one famous example is its use in the Berkeley Overmind AI for StarCraft, which ended up winning an AI StarCraft competition. If you do adopt this sort of approach, you could probably then tweak the potential function to get the AI to behave in many other interesting ways, and could easily support flocking. I personally think that this is the best approach to take, as it's the most flexible. It also would be a great starting point for more advanced pathfinding models. For a very good and practical introduction to potential fields for AI, check out this website. For a rigorous mathematical introduction to potential fields and their applications, you might want to check out this paper surveying different AI methods using potential fields.
If you define a bounding circle for each enemy, you could just explicitly disallow the units from stacking on top of each other by preventing any two units from being within two radii of one another. Any time two units get too close, you could either stop one of them from moving, or have them exert forces on one another to spread them apart. When two units bump into each other, you could just pick a random force vector to apply to each unit to try to spread them apart. This is a much hackier and less elegant solution than potential fields, but if you need to get something up and running it's definitely a viable option (a rough sketch follows this list).
You could choose a set of points around the player that the units try to move toward, then have each unit randomly choose one of those target points to move to. This would cause the units to spread more thinly in a ring (or whatever shape you'd like) around the player, avoiding the huge masses that you've seen so far. Again, this is way less elegant than using potential fields, but it's another quick hack you could experiment with if your goal is to get something working quickly.
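As a rough, untested sketch of the second (bounding-circle) approach, assuming enemies are tagged "Enemy" and each has a collider (both assumptions, not from the question):

```csharp
using UnityEngine;

public class Separation : MonoBehaviour
{
    public float radius = 1f;        // personal-space radius per enemy
    public float pushStrength = 2f;

    void Update()
    {
        // Find neighbours closer than two radii and step away from them.
        Collider[] neighbours = Physics.OverlapSphere(transform.position, radius * 2f);
        Vector3 push = Vector3.zero;

        foreach (Collider other in neighbours)
        {
            if (other.transform == transform || !other.CompareTag("Enemy")) continue;

            Vector3 away = transform.position - other.transform.position;
            float dist = away.magnitude;
            if (dist > 0.001f)
                push += away.normalized * ((radius * 2f - dist) / (radius * 2f)); // stronger when closer
        }

        transform.position += push * pushStrength * Time.deltaTime;
    }
}
```

With 100-200 enemies this per-frame OverlapSphere query is usually fine, and it can later be replaced by a proper potential-field or flocking step without changing the overall structure.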
Hope this helps!