My door animation plays fine in the Editor, but in Play mode it doesn't seem to render, although the collisions still work fine. Here's what happens.
The Animator is set up like this
The animation the game should show consists of just moving two child objects. What did I do wrong, and how do I fix it?
It pretty much seems like your objects are marked as static, and therefore their meshes get combined into a single static scene mesh.
Many systems in Unity can precompute information about static GameObjects in the Editor. Because the GameObjects do not move, the results of these calculations are still valid at runtime. This means that Unity can save on runtime calculations, and potentially improve performance.
As the name static already suggests: these objects are static and cannot be moved by the Animator.
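A quick way to catch this mistake early is an edit-time check. The sketch below is a hypothetical helper (the component name is my own invention) that warns in the Console whenever an object with an Animator is still flagged static:

```csharp
using UnityEngine;

// Hypothetical editor-time sanity check: static batching bakes the mesh
// into the scene, so an Animator on a static object cannot move it.
[RequireComponent(typeof(Animator))]
public class StaticAnimationCheck : MonoBehaviour
{
#if UNITY_EDITOR
    void OnValidate()
    {
        if (gameObject.isStatic)
            Debug.LogWarning(name + " is marked static; its Animator cannot move it in Play mode.", this);
    }
#endif
}
```

Alternatively, just select the door's root and children in the Hierarchy and untick the Static checkbox in the top right of the Inspector.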
Related
I'm very new to Unity and have a question regarding loading of game objects at runtime. I have only 4 scenes (4 different 'surroundings'). Each of them displays a 3D object - the one that the user selects from the menu. There will be around 60 3D objects (but only one at a time is displayed). How should I deal with loading them? I don't want to create 60 × 4 scenes and add each prefab to a scene individually, but instead keep only 4 scenes and programmatically add the 3D objects to them. What is the best practice memory- and performance-wise? Can they be reused? How should I approach this if the objects are very big? Are there any code samples?
You can have a script in each scene referencing the 60 prefabs. (There's an easy trick to it: if you have a public List and want to drag multiple objects from the Project window, lock your Inspector using the padlock. You can then select multiple prefabs in the Project window and drag them all in one move, releasing on the name of the list instead of the expanded empty field.)
Prefabs are shared within a project so they won't get duplicated between scenes.
You could also have them all in the scene to start with, deactivated, and just activate the one selected prefab. This is slightly easier, but will use more memory; in terms of build size it should make no difference. If the objects are big, you are probably better off instantiating and destroying them as you go.
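A minimal sketch of the instantiate-and-destroy approach, assuming a `prefabs` list filled in the Inspector as described above (the class and field names here are illustrative, not from the question):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class ObjectSelector : MonoBehaviour
{
    // Drag the 60 prefabs in here using the padlock trick.
    public List<GameObject> prefabs;

    GameObject current;

    // Called by the menu with the index of the chosen object.
    public void Show(int index)
    {
        // Destroy the previously shown object so only one exists at a time.
        if (current != null)
            Destroy(current);

        current = Instantiate(prefabs[index], Vector3.zero, Quaternion.identity);
    }
}
```

Since prefabs are shared assets, the same `prefabs` list can be reused by the script in all 4 scenes without duplicating anything in the build.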
I’m far from being an expert, but the way I have done it is with Instantiate.
Practically, you generate a GameObject from a prefab and later (if it isn’t needed anymore) you call Destroy(gameObject);
Instantiate(prefab, new Vector3(x, y, z), Quaternion.identity);
I want to simulate the trajectories of planets in a separate scene to find out where they will be in the future. I drew a quick diagram to demonstrate.
Is there a way to simulate 2 scenes separately, hiding one but showing the other? I tried this which says they don't interact with each other, but when I tried it they still collided.
Yes, there is a way: put all "hidden" objects, like the planets, on a separate layer in Unity,
and disable collisions between that layer and any other layers under
Edit > Project Settings > Physics
This way those objects won't have any effect on the rest of your scene.
To visually hide the objects, simply disable the rendering of that layer in your scene camera's culling mask. And that's it; I hope this was helpful.
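If you prefer doing the same setup from code instead of the Physics settings matrix, a sketch could look like this (it assumes a layer named "Simulation" exists in your project):

```csharp
using UnityEngine;

// Sketch: isolate a "Simulation" layer both physically and visually.
public class SimulationSetup : MonoBehaviour
{
    void Start()
    {
        int sim = LayerMask.NameToLayer("Simulation");

        // Disable collisions between the simulation layer and all other layers.
        for (int layer = 0; layer < 32; layer++)
            if (layer != sim)
                Physics.IgnoreLayerCollision(sim, layer, true);

        // Hide the simulation layer from the main camera.
        Camera.main.cullingMask &= ~(1 << sim);
    }
}
```

Note that `Physics.IgnoreLayerCollision` is a global setting, so remember to undo it if only some scenes need the isolation.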
I'm trying to restart my game when the player loses. I run a coroutine that makes the player ignore collisions and fall off the platforms. Then, when the player clicks a button, the same scene is reloaded:
SceneManager.LoadScene("GameScene");
But when the scene loads again, the player is still ignoring collisions and falls; it's like the scene is loaded, but not the same way as when the game is played for the first time.
How can I reload the scene properly without closing the application and opening it again?
Thank you.
The problem is that you are using Physics2D.IgnoreLayerCollision for this.
This setting is global: it affects all scenes and is not tied to any specific scene. SceneManager.LoadScene only resets properties related to the specific scene or the objects in it.
You have 2 options here:
Don't use IgnoreLayerCollision. An alternative would be, for example, to disable all colliders on the player; you can use GetComponentsInChildren to find them all.
Reset IgnoreLayerCollision manually before reloading the scene.
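A sketch of the second option, assuming the coroutine ignored collisions between layers 8 and 9 (substitute whichever layers your coroutine actually uses):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

public class RestartButton : MonoBehaviour
{
    // Hook this up to the button's OnClick event.
    public void OnRestartClicked()
    {
        // IgnoreLayerCollision is global state, so undo it explicitly --
        // LoadScene will not reset it for you.
        Physics2D.IgnoreLayerCollision(8, 9, false);
        SceneManager.LoadScene("GameScene");
    }
}
```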
I'm currently working on an Editor script in Unity that disables specific GameObjects in a multi-scene setup to make the Editor more responsive. I'm working with a setup that has up to 4 scenes loaded at the same time. Since all these scenes have many GameObjects in them (I'm talking thousands or even tens of thousands of objects), the Unity Editor gets pretty unresponsive and sluggish when these scenes are loaded. Playing and builds are fine, by the way; it's only the Editor that is unusable.
So I'm working on a simple Editor tool that disables specific parent objects in all loaded scenes so the Editor is usable again. This tool works fine, but I want to prevent the changes on these GameObjects from being saved. I intentionally didn't mark the GameObjects as dirty so Unity doesn't notice when I disable the objects. The problem is, when I actually work on a scene and save my work, the disabled GameObjects get saved as well, because at that point the scene is marked as dirty. I've already looked into HideFlags, but the problem there is that I can't make Unity ignore only the changes that were made to an object. HideFlags will only prevent the object itself from being saved, which is not what I want.
If somebody knows a simple way to make Unity ignore changes on specific GameObjects, that would be extremely helpful :)
Alternatively, I would be interested in a way to run custom code while Unity is saving the scenes. That way I could make sure these GameObjects are enabled before Unity saves the scene.
Thanks in advance
Don't know about preventing dirty objects from being saved (I doubt it is possible), but running custom code before Unity saves a scene is easy.
Example editor script (remember it should be placed under an Editor/ directory to work, like any other editor script):
using UnityEditor;

[InitializeOnLoad]
static class EditorSceneManagerSceneSaved
{
    static EditorSceneManagerSceneSaved()
    {
        UnityEditor.SceneManagement.EditorSceneManager.sceneSaving += OnSceneSaving;
    }

    static void OnSceneSaving(UnityEngine.SceneManagement.Scene scene, string path)
    {
        UnityEngine.Debug.LogFormat("Saving scene '{0}' to {1}", scene.name, path);
        /* Do your magic here */
    }
}
Credit for the example goes here, more info on the API here
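Building on that callback, the "magic" for this particular question could re-enable the tool's disabled parents right before the save and disable them again afterwards via `EditorSceneManager.sceneSaved`. The sketch below assumes the tool tracks what it disabled in a `DisabledParents` list (a hypothetical name, not a Unity API):

```csharp
using System.Collections.Generic;
using UnityEditor;
using UnityEditor.SceneManagement;
using UnityEngine;

[InitializeOnLoad]
static class SaveStateRestorer
{
    // Hypothetical: filled by the editor tool with the parents it disabled.
    public static readonly List<GameObject> DisabledParents = new List<GameObject>();

    static SaveStateRestorer()
    {
        EditorSceneManager.sceneSaving += (scene, path) =>
        {
            foreach (var go in DisabledParents)
                if (go != null && go.scene == scene)
                    go.SetActive(true);   // save the object in its real state
        };
        EditorSceneManager.sceneSaved += scene =>
        {
            foreach (var go in DisabledParents)
                if (go != null && go.scene == scene)
                    go.SetActive(false);  // keep the Editor responsive again
        };
    }
}
```

This way the saved scene file always contains the objects as enabled, while the open scenes in the Editor stay lightweight.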
In all the scripts I've used so far, changing the walk speed in the script or adding an animation clip didn't affect it at all.
To change the character speed I need to change it in the Third Person Character (Script) > Move Speed Multiplier.
And to change or add an animation I need to go to the Animator window, add a new state, use HumanoidWalk in that state, and then either set the state as default or play it from a script with Play("Walk").
Then what are all the properties in the scripts for? Why do the speed, animation and other settings never affect it? (Not talking about the Nav Mesh Agent or the character's Transform, if needed.)
For example, I have a script that accepts a Walk anim and I select HumanoidWalk, but that makes the character not walk at all. Only if I make the state in the Animator window will it walk.
It's not only one specific script; it happens in others too.
I see in many places users use Animation or _animation with Play("Walk"), but to make the player move with an animation I need to use Animator or _animator.
So what is the difference in Unity scripts between the Animation and Animator systems? What should I use to make the character (in this case a ThirdPersonController) walk with an animation, and not just move?
For example, when using waypoints, I want the enemies to start walking/patrolling automatically when the game runs, so I make a new state in the Animator window with HumanoidWalk, and then in the script specific to an enemy I use Play("Walk").
Complexity and backwards compatibility.
Basically, when Unity was first created as a product, it was a pretty basic game engine, and a lot of the systems necessary for game development were not very advanced. Then a need arose for something with more capabilities, and in many cases Unity decided to create a completely new system from scratch and leave the old one in place as well.
Now we have legacy and new systems for GUI, for animation, for input handling, for particles, and probably for something else I'm forgetting right now. However, that doesn't mean the old systems are completely useless: quite often you want a simple and straightforward system without all the bells and whistles.
The new animation system allows you to create great characters, but it also takes a lot of time to learn and set up. If I had a simple animated mesh that just needs to play the same animation on a loop, I would use the old system; if I had a complicated character with several layers of different behaviours and animations created to blend with one another, I would use the new Animator.
By the way, the same holds for UI: while the old system is pretty bad for building player-facing UI, it's still widely used for quick prototypes and all kinds of debug menus.
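The practical difference shows up clearly in code. A sketch contrasting the two (it assumes a legacy Animation component holds a clip named "Walk", or an Animator Controller contains a state named "Walk"):

```csharp
using UnityEngine;

public class WalkExample : MonoBehaviour
{
    void Start()
    {
        // Legacy system: the Animation component plays a *clip* by name.
        var legacy = GetComponent<Animation>();
        if (legacy != null)
            legacy.Play("Walk");

        // New system: the Animator plays a *state* defined in its
        // Animator Controller, so the state must exist in the Animator
        // window first -- which is why Play("Walk") does nothing until
        // you create that state.
        var animator = GetComponent<Animator>();
        if (animator != null)
            animator.Play("Walk");
    }
}
```

This is why assigning HumanoidWalk to a script slot written for the legacy Animation component has no effect on a ThirdPersonController: that controller is driven by an Animator, which only knows about states in its controller asset.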