Lighting/lightmaps with Unity 5 using ProBuilder - C#

I'm getting back into Unity again. I used to use Unity 4 to build a level where I could literally shoot real-time lights out of my gun. I built my scene with ProBuilder as well and had loads of cars and patrolling guards.
And it was smooth.
I tried to create my new game in Unity 5 and everything seems amazing. The light bounces look fantastic. However, I've realized that when I try to bake my scene, which so far is three rooms and a hallway made with ProBuilder, it seems to take forever to bake, and my baked lights don't seem to bake onto the ProBuilder meshes. I have one realtime directional light, one mixed point light and a realtime spot light.
I'm getting about 300 SetPass calls and about 120k verts from a small level, at about 30-40 fps, using occlusion culling.
I don't understand how Unity 5's lighting works now, and it's frustrating. All the realtime GI. Can I get some help here?

If you want to use (precomputed) realtime GI, all you need to do is use realtime/mixed lights and enable Realtime Global Illumination in the Lighting settings.
Mixed lights will only bake STATIC objects, so make sure your ProBuilder models are set as static.
Check this link -> https://unity3d.com/es/learn/tutorials/topics/graphics/introduction-lighting-and-rendering?playlist=17102
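If you'd rather flag a whole level from an editor script than click through the Inspector, here is a minimal sketch; the "Level" tag is a placeholder, and StaticEditorFlags.LightmapStatic is the Unity 5 name for the flag (renamed ContributeGI in later versions):

    using UnityEngine;
    using UnityEditor;

    public static class MakeLevelStatic
    {
        [MenuItem("Tools/Mark Level Static")]
        static void MarkStatic()
        {
            // Flag every tagged object for lightmapping and occlusion culling.
            foreach (GameObject go in GameObject.FindGameObjectsWithTag("Level"))
            {
                GameObjectUtility.SetStaticEditorFlags(go,
                    StaticEditorFlags.LightmapStatic | StaticEditorFlags.OccluderStatic);
            }
        }
    }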

Related

Meta 2 with SteamVR; Cannot move around in Unity (nearly) frozen

I have a problem with the project I am working on in Unity and have been trying to solve it for days.
I am using the Meta 2 and the HTC Vive base station. I am continuing the work of a colleague who was working on another computer. I am trying to recreate the project he created. He gave me the project and everything included, but he doesn't know how to solve my problem either.
It looks like this:
I can open and play the project. When I play it, I can see everything I want to see in 3D with the Meta 2, but I cannot look around. When I move the Meta I see a kind of "vibrating" movement on the screen, but in the end everything stays in the same position.
I also connected the Sense Gloves to my computer. I can move the fingers of the Sense Gloves on the screen (they are shown as blue circles which I can move), BUT I cannot move the whole hand. It stays where it was initialized in the room.
I think it is a problem with the transfer of the Vive tracking data to Unity (the position of the tracker coordinate system in Unity is not moving even though it should).
I hope this explanation is accurate enough. If you need more information, do not hesitate to ask.
Kind regards
Alex

What is the difference in scripts in Unity between the Animation and Animator?

In all the scripts I've used so far, changing the walk speed or adding an animation clip in the script didn't affect anything at all.
To change the character speed I need to change it in the Third Person Character (Script) > Move Speed Multiplier.
And to change or add an animation I need to go to the Animator window, add a new state, use HumanoidWalk in that state, then either set the state as default or play it from a script with Play("Walk").
So what are all the properties in the scripts for? Why do the speed, animation and other fields never affect anything? (Not talking about Nav Mesh Agent or Character Transform if needed.)
For example, I have a script that accepts a Walk anim and I select HumanoidWalk, but that makes the character not walk at all. Only if I create the state in the Animator window does it walk.
It's not only one specific script; the same happens in others too.
I see in many places users use Animation or _animation with Play("Walk"), but to make the player move with animation I need to use Animator or _animator.
So what is the difference, in Unity scripts, between the Animation and Animator windows? What should I use to make the character (in this case ThirdPersonController) walk with animation and not just move?
For example, when using waypoints I want the enemies to start walking/patrolling automatically when the game runs, so I create a new state in the Animator window with HumanoidWalk and then, in the script for a specific enemy, I use Play("Walk").
Complexity and backwards compatibility.
Basically, when Unity was first created as a product, it was a pretty basic game engine, and a lot of systems that were necessary for game development were not very advanced. Then a need arose for something more capable, and in many cases Unity decided to create a completely new system from scratch and keep the old one as well.
Now we have legacy and new systems for GUI, for animation, for input handling, for particles, and probably for something else I'm forgetting right now. However, that doesn't mean the old systems are completely useless: quite often you want a simple, straightforward system without all the bells and whistles.
The new animation system lets you create great characters, but it also takes a lot of time to learn and set up. If I had a simple animated mesh that just needs to loop the same animation, I would use the old system; if I had a complicated character with several layers of different behaviours and animations that blend with one another, I would use the new Animator.
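To make the contrast concrete, here is a minimal sketch; the component APIs are Unity's, but the "Walk" clip/state name is a placeholder:

    using UnityEngine;

    public class WalkExample : MonoBehaviour
    {
        void Start()
        {
            // Legacy system: Animation.Play runs a clip that was added
            // directly to the Animation component's clip list.
            Animation legacyAnim = GetComponent<Animation>();
            if (legacyAnim != null)
                legacyAnim.Play("Walk");

            // New system: Animator.Play jumps to a state defined in the
            // Animator Controller's state machine. This is why assigning a
            // clip to a script field does nothing by itself: the Animator
            // only plays states that exist in its controller graph.
            Animator animator = GetComponent<Animator>();
            if (animator != null)
                animator.Play("Walk");
        }
    }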
By the way, the same holds for UI: while the old system is pretty bad for building player-facing UI, it's still widely used for quick prototypes and all kinds of debug menus.

Unity 3D - forcing GI re-bake at runtime

I am working on a procedurally generated art project. The idea is that at runtime several clay pots/vases are procedurally generated from a lathe/line and rotation script. The pots are located on a circular platform and the camera rotates around this platform.
Pots are generated just in front of the camera's field of view and deleted after they pass out of it. At any given time there are about 30 vessels.
The script is being changed so that the generation and deletion happen in a large batch while the camera movement is stopped. I would like to force Unity to re-bake GI at this time, but I can't figure out how.
Since none of the objects are moving and there will be a complete pause in all movement (the viewer will not know that the CPU is busy), I'd like to avoid using realtime GI. Baked GI will let me use area lights and more expensive GI settings, but the objects generated at runtime need to be accounted for.
I hope this makes sense. I don't think I am using Unity in a typical fashion, so there is not a lot of documentation about this.
Also, if anyone knows a scripting API to access Lighting -> Object -> Important GI [x], that would be immensely helpful.
I was able to use GameObjectUtility to make all of the vessels Lightmap Static, but they also need to be flagged Important GI (they generate on top of each other, inside each other, etc., and they have a slightly specular material).
The best solution seems to be a piecemeal procedurally generated scene. This is briefly mentioned in the introduction to Global Illumination.
You might look into 5.3's SceneManager, which allows making multiple scenes and piecing them together. If you baked lighting in such scenes, I imagine you could combine them as you wished.
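A minimal sketch of that additive approach, assuming you have scenes baked ahead of time (the scene name here is a placeholder):

    using System.Collections;
    using UnityEngine;
    using UnityEngine.SceneManagement;

    public class AdditiveBakedLoader : MonoBehaviour
    {
        // Load a pre-baked scene on top of the current one during the
        // pause in camera movement; its lightmaps come along with it.
        public IEnumerator LoadBakedBatch()
        {
            AsyncOperation op = SceneManager.LoadSceneAsync(
                "PotsBatch_01", LoadSceneMode.Additive);
            while (!op.isDone)
                yield return null;
        }
    }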

How to combine vertices and edges into one in Unity

I'm new to Unity and I'm making a car racing game. Now I'm stuck at a certain point. I've been looking for a solution to my problem but haven't succeeded.
My problem is:
When I run my game on my phone, it stutters badly whenever there are several buildings in front of the car camera, like one building behind another. The reason is that there are so many vertices and edges at that moment that the car camera can't render all that geometry at the same time.
How do I preload the 2nd scene while loading the 1st scene?
I am using the Unity free version.
In graphics programming there is a common technique of simply not drawing objects that aren't in the field of view. I'm sure Unity can handle this. Check link: Unity description on this topic
I'm not hugely knowledgeable about Unity, but as a 3D modeller there's a bunch of things you can do to improve performance:
Create a simplified version of your buildings with fewer polygons for use when buildings are a long way away. A skyscraper, for example, can be as simple as a textured box.
If you've done that already, reduce the distance at which the simpler imposters are substituted for the complex versions (see the LODGroup sketch after this list).
Reduce the number of polygons by other means. A good example: if you've got a window ledge sticking out of the side of a building, don't try to make it an extension of the body. Instead, make it a separate box, delete the facet that won't be seen, and move it to intersect with the rest of the building.
Another good trick is to use bump maps or normal maps to approximate smaller features, rather than trying to model everything.
Opaqueness. Try not to have transparent windows in your buildings. It's computationally cheaper to make them just reflect the skybox or a suitably blurred reflection imposter. Also make sure that the material's shader is in Opaque mode, if it supports this.
You might also benefit a little from checking the 'Static' box on the game object, assuming buildings can't be moved (e.g. by smashing through them with a bulldozer).
Collision detection can also be a big drain. Be sure to use the simplest possible detection mesh you can - either a box, cylinder, sphere or a combination.
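For the imposter-swapping point above, here is a minimal sketch of wiring up a LODGroup from script; the renderer fields are placeholders you'd assign in the Inspector, and the thresholds are just example values:

    using UnityEngine;

    public class BuildingLOD : MonoBehaviour
    {
        public Renderer detailedRenderer;  // full building mesh
        public Renderer imposterRenderer;  // low-poly / textured-box stand-in

        void Start()
        {
            LODGroup group = gameObject.AddComponent<LODGroup>();
            // Screen-relative heights: detailed mesh while the building
            // covers more than 50% of the screen, imposter down to 1%,
            // culled entirely below that.
            LOD[] lods = new LOD[]
            {
                new LOD(0.5f,  new Renderer[] { detailedRenderer }),
                new LOD(0.01f, new Renderer[] { imposterRenderer })
            };
            group.SetLODs(lods);
            group.RecalculateBounds();
        }
    }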

Unity3D Changing Ball Direction When Hitting a Cube

I'm working on a simple paddle game project at the moment. In the beginning everything looked good. But when I completed my level design and published my game, I saw the game objects in my scene moving very slowly. I think I'm exceeding Unity3D's physics limits. If I try some math instead of Unity3D's colliders, I can finish my first project. (I tried to use the Separating Axis Theorem, but I can't handle the x and y coordinates in Unity3D.)
I need your help. Thanks a lot for your time. (And if I can handle this problem, I will share my project on the internet for beginners like me.)
In my project I achieved this simulation using BoxCollider, but because of Unity3D's physics limits I don't want to use colliders in my project.
You can't get around detecting collisions somehow. I'd recommend using Unity's colliders for that, since they're really not bad. The first thing I notice is that in your first simulation you have 4 box colliders where you should only have one. Use the OnCollisionEnter event (on the spheres) to reflect them off your box.
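A minimal sketch of that reflection idea, attached to the ball; this assumes a Rigidbody-driven ball, and caches the velocity in FixedUpdate because the physics solver has already altered it by the time OnCollisionEnter runs:

    using UnityEngine;

    [RequireComponent(typeof(Rigidbody))]
    public class BallBounce : MonoBehaviour
    {
        Rigidbody rb;
        Vector3 lastVelocity; // velocity from the step before impact

        void Awake()
        {
            rb = GetComponent<Rigidbody>();
        }

        void FixedUpdate()
        {
            // Cache the pre-impact velocity each physics step.
            lastVelocity = rb.velocity;
        }

        void OnCollisionEnter(Collision collision)
        {
            // Reflect the incoming velocity about the first contact normal.
            Vector3 normal = collision.contacts[0].normal;
            rb.velocity = Vector3.Reflect(lastVelocity, normal);
        }
    }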
