Unity 3D - forcing GI re-bake after runtime - c#

I am working on a procedurally generated art project. The idea is that at runtime, several clay pots/vases are procedurally generated from a lathe/line-and-rotation script. The pots sit on a circular platform, and the camera rotates around this platform.
Pots are generated just in front of the camera's field of vision and are deleted after they pass out of it. At any given time, there are about 30 vessels.
The script is being changed so that generation and deletion happen in one large batch while the camera movement is stopped. I would like to force Unity to re-bake GI at that moment, but I can't figure out how.
Since none of the objects are moving and there will be a complete pause in all movement (the viewer will not notice the CPU load), I'd like to avoid using realtime GI. Baked GI will allow me to use area lights and more expensive GI settings, but the objects generated at runtime need to be accounted for.
I hope this makes sense. I don't think I am using Unity in a typical fashion, so there is not a lot of documentation about this.
Also, if anyone knows a scripting API to access Lighting -> Object -> Important GI [x] that would be immensely helpful
I was able to use GameObjectUtility to make all of the vessels Lightmap Static, but they also need to be GI Important (they generate on top of each other, inside each other, etc., and they have a slightly specular material).

The best solution seems to be a piecemeal, procedurally assembled scene. This is briefly mentioned in the introduction to Global Illumination.
You might look into 5.3's scene manager, which allows for making multiple scenes and piecing them together. If you baked lighting in each such scene, I imagine you could combine them as you wished.
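For the scripting-API part of the question, here is a hedged sketch of what appears possible in Unity 5.x. Note that baking lives in the `UnityEditor.Lightmapping` class, so it only runs inside the Editor (including Editor play mode), never in a standalone build. The serialized property name `m_ImportantGI` is an internal detail of how the MeshRenderer is serialized and may change between Unity versions:

```csharp
// Editor-only sketch (Unity 5.x): marks a generated vessel as
// lightmap-static + "Important GI", then kicks off an async re-bake.
using UnityEngine;
using UnityEditor;

public static class GIRebakeUtil
{
    public static void MarkForBakedGI(GameObject vessel)
    {
        // Same effect as ticking "Lightmap Static" in the Inspector.
        GameObjectUtility.SetStaticEditorFlags(
            vessel, StaticEditorFlags.LightmapStatic);

        // "Important GI" has no public API in 5.x; poking the serialized
        // property is an assumption that may break between versions.
        var renderer = vessel.GetComponent<MeshRenderer>();
        if (renderer != null)
        {
            var so = new SerializedObject(renderer);
            var importantGI = so.FindProperty("m_ImportantGI");
            if (importantGI != null)
            {
                importantGI.boolValue = true;
                so.ApplyModifiedProperties();
            }
        }
    }

    // Call once the whole batch of vessels is in place.
    public static void RebakeNow()
    {
        if (!Lightmapping.isRunning)
            Lightmapping.BakeAsync();
    }
}
```

Because of the Editor-only restriction, this fits a workflow where the piece runs inside the Editor (e.g. for capture), not a shipped player.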

Related

Editing unity scene files post-compile

I'm working on a mod engine for a Unity game. It is 2D-based and stores all the level data in separate files ("level1", "level2") in the game data folder (I believe this is standard for Unity). I am looking to edit these post-compile to add/remove game objects in the scene, and I wish to be able to do this programmatically in C#.
I've already had a look at the file in the hex editor, and it seems like this is possible (I can see basic game object data).
Currently I'm loading the scene and then moving all objects around or instantiating new ones, but this is proving unstable because of the way the game handles objects.
If anyone could point me in the direction of how I would go about this, it would be greatly appreciated.
Update for those asking for additional info: yes, by levels I mean scenes; Unity saves them as "level0", "level1", etc.
I am not the author of the game, and the game was not designed with changing the scenes in mind. Almost all of the interactable objects have special triggers attached to them, so moving them requires me to be extremely careful or the game crashes.
From what you write, it seems your separate files are actual Unity3D scenes. You should have only one scene and a system in place that reads and loads your data, for example from text files, then programmatically instantiates those objects at run time.
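A minimal sketch of that suggestion. The file path, line format (prefab name plus a 2D position), and the Inspector-assigned prefab table are all made up for illustration, not the game's actual format:

```csharp
// One scene + a data file listing what to spawn, instead of many scene files.
using System.IO;
using UnityEngine;

public class LevelLoader : MonoBehaviour
{
    // Assigned in the Inspector; looked up by prefab name.
    public GameObject[] prefabs;

    // Each line of the file: "prefabName x y", e.g. "Crate 3.5 1.0"
    public void Load(string path)
    {
        foreach (var line in File.ReadAllLines(path))
        {
            var parts = line.Split(' ');
            if (parts.Length < 3) continue;

            var prefab = System.Array.Find(prefabs, p => p.name == parts[0]);
            if (prefab == null) continue;

            var pos = new Vector3(float.Parse(parts[1]),
                                  float.Parse(parts[2]), 0f);
            Instantiate(prefab, pos, Quaternion.identity);
        }
    }
}
```

The point is that the data file is trivially editable by a mod tool, while the scene itself never changes.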

Lighting/lightmaps with Unity 5 using Probuilder

I'm getting back into Unity again. I used to use Unity 4 to create my level, where I could literally shoot real-time lights out of my gun. I used ProBuilder to create my scene as well and had loads of cars and patrolling guards.
And it was smooth.
I tried to create my new game using Unity 5 and everything seems amazing. The light bounces look fantastic. However, when I try to bake my scene (three rooms and a hallway made with ProBuilder so far), it seems to take forever, and my baked lights don't seem to bake onto the ProBuilder meshes. I have one realtime directional light, one mixed point light and a realtime spotlight.
I'm getting about 300 SetPass calls and about 120k verts from a small level, at about 30-40 fps, using occlusion culling.
I don't understand how Unity 5's lighting works now, and it's frustrating. All the realtime GI. Can I get some help here?
If you want to use realtime (precomputed) GI, all you need to do is use realtime/mixed lights and enable Realtime Global Illumination in the Lighting settings.
Mixed lights will bake static objects (make sure your ProBuilder models are set as static).
Check this link -> https://unity3d.com/es/learn/tutorials/topics/graphics/introduction-lighting-and-rendering?playlist=17102

How to Combine Vertices and edges into one In Unity

I'm new to Unity and I'm making a car racing Game. Now, I'm stuck at some point. I was looking for some solution of my problem, but couldn't succeed.
My problem is:
When I run my game on my phone, it lags badly whenever there are several buildings in front of the car camera, like one building behind another. The reason is that there are so many vertices and edges at that point that the car camera is unable to render all that stuff at the same time.
How do I preload the 2nd scene while loading the 1st scene?
I am using Unity free version.
In graphics programming, there is a common technique of simply not drawing objects that aren't in the field of view (frustum culling). I'm sure Unity can handle this. Check link: Unity description on this topic
I'm not hugely knowledgeable about Unity, but as a 3D modeller there's a bunch of things you can do to improve performance:
Create a simplified version of your buildings with fewer polygons for use when buildings are a long way away. A skyscraper, for example, can be as simple as a textured box.
If you've done that already, reduce the distance at which the simpler impostors are substituted for the complex versions.
Reduce the number of polygons by other means. A good example: if you've got a window ledge sticking out of the side of a building, don't try to make it an extension of the main body. Instead, make it a separate box, delete the facets that won't be seen, and move it to intersect with the rest of the building.
Another good trick is to use bump maps or normal maps to approximate smaller features, rather than trying to model everything.
Opaqueness. Try not to have transparent windows in your buildings. It's computationally cheaper to make them just reflect the skybox or a suitably blurred reflection impostor. Also make sure that the material's shader is in Opaque mode, if it supports this.
You might also benefit a little from checking the 'Static' box on the game object, assuming that buildings can't be moved (e.g. by smashing through them with a bulldozer).
Collision detection can also be a big drain. Be sure to use the simplest possible detection mesh you can - either a box, cylinder, sphere or a combination.
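The first two bullet points (simplified distant versions, and the distance at which they swap in) map directly onto Unity's LODGroup component. A hedged sketch, assuming the detailed building mesh and the textured-box impostor live on separate child renderers (names here are illustrative):

```csharp
// Two-level LOD built at runtime: full mesh up close, box at distance,
// culled entirely when very small on screen.
using UnityEngine;

public class BuildingLOD : MonoBehaviour
{
    public Renderer detailedRenderer; // full building mesh
    public Renderer boxRenderer;      // textured-box impostor

    void Start()
    {
        var group = gameObject.AddComponent<LODGroup>();
        var lods = new LOD[]
        {
            // Detailed mesh while the building covers > 40% of screen height,
            // the box down to 1%, then nothing at all.
            new LOD(0.4f, new[] { detailedRenderer }),
            new LOD(0.01f, new[] { boxRenderer }),
        };
        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}
```

In practice you'd set these up in the Editor rather than at runtime, but the thresholds are the same knob as "the distance at which the simpler impostors are substituted".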

Anyone tried Leap Motion with Unity? We can't grab objects in the Demo Pack

We've set up the Leap Motion, got it to successfully run in standard Unity by moving the DLLs around per the instructions, and can successfully track hand positions when running the scenes in this demo. But we cannot grab objects in any scene. We have only gotten the Boxing and Flying scenes to work, because those in fact require no gestures: simply pushing outwards knocks the bag around, or it just detects the relative positions of the hands to cause flight. But the actual grab action we cannot get to execute, in Unity only. The Airspace apps (Orientation + Freeform) work fine, and the Visualizer works fine.
See this short video of us trying: http://youtu.be/9kTXCEwUhoc The documentation for Boxing, ATVDriving, and Weapons all just says to grab when colliding, but we've tried many times and cannot get it to execute even once. The rings should turn red exactly like here https://www.youtube.com/watch?v=RA7a6foNlHo&t=1m8s but they never do, always staying blue no matter what we do.
Any idea what's wrong?
Demo Pack Documentation: https://developer.leapmotion.com/documentation/skeletal/csharp/devguide/Unity_Demo_Pack.html
GitHub project: https://github.com/GameMakersUnion/LeapTest (already has DLLs setup for Standard -free- Unity)
This question was answered on the Leap Motion forum; thought I'd copy it here:
Are you trying to grab objects in general or are you trying to get the demo pack to work?
I don't really know why the demo pack doesn't work as I don't know very much about it, but if you're trying to make an app to grab objects you might check out our v2 tracking and skeletal assets.
developer.leapmotion.com
https://www.assetstore.unity3d.com/en/#!/content/177703
There's a few demos in our gallery that show grabbing, specifically the RagdollThrower example.
Also, next week we'll be updating the skeletal unity assets with more powerful grabbing.
Not sure if this is what you're looking for, but I thought it might be of interest.
(Source: https://community.leapmotion.com/t/anyone-here-tried-the-unity-demo-pack-we-cant-grab-objects-in-it/1415)
What you need is to look at the physics model for hand pinching, or something like that; the word "pinch" is the trick.
You can add the pinching behaviour to your hand controller or sandbox; the object also needs a Rigidbody to be grabbable.
Please get in touch if you want to share info about Leap and C#.

Render target for each scene, for transitions

I'm trying to come up with a general concept for transitions, without having to include any specific code inside the actual scene. In all the samples I've seen so far the scenes handle that stuff themselves. Like a fade in/out, where the scenes have to adjust their draw method, for the correct transparency. But not only would this be annoying to have in every scene, with multiple kinds of transitions you'd get cluttered code pretty fast.
So I've been thinking. The only way I could come up with to support this, and more complicated transitions, without handling it in the scene itself, is render targets for each scene. The render target would be set from the scene manager before calling the draw method. After draw, the render target would be reset to the manager's render target, and the scene's texture would be drawn with information about the current transition. Without clearing the manager's texture, more scenes would follow if more have to be drawn. This way you could do pretty much anything (transition-wise), the scenes would be completely independent of each other, and you wouldn't need a single line of transition code in the actual scenes. (For reference, there are various transition types I'd like to be able to do. Not only fades, but also shader effects, moving one scene in while the current one is "pushed away", etc., involving one or multiple scenes.)
Well, that's my theory. Here comes my question: Does this sound like a viable plan? Is this the way to go? I've read about performance problems when switching rendering targets too frequently, and other problems, which is the main reason I'm hesitating to implement this. But so far I couldn't think of or find a better method. Although I think it shouldn't make a difference, for the moment I only care about 2D (mentioned just in case).
That, in general, is a sound approach, but beware render-target switching costs. Also, if each scene were animated during the transition, you'd be rendering two scenes at the same time into two different render targets and then compositing the results on screen.
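The scheme described above might be sketched roughly like this, in XNA/MonoGame style (guessed from the question's draw-method wording; all names such as `oldScene`, `newScene`, and `progress` are illustrative):

```csharp
// Each scene draws into its own render target, knowing nothing about
// transitions; the manager composites the two textures afterwards.
RenderTarget2D oldTarget = new RenderTarget2D(GraphicsDevice, width, height);
RenderTarget2D newTarget = new RenderTarget2D(GraphicsDevice, width, height);

// 1. Render each scene off-screen.
GraphicsDevice.SetRenderTarget(oldTarget);
GraphicsDevice.Clear(Color.Transparent);
oldScene.Draw(gameTime);

GraphicsDevice.SetRenderTarget(newTarget);
GraphicsDevice.Clear(Color.Transparent);
newScene.Draw(gameTime);

// 2. Composite to the back buffer; 'progress' runs 0..1 over the transition.
//    A cross-fade is shown, but any shader or slide effect works the same way.
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin();
spriteBatch.Draw(oldTarget, Vector2.Zero, Color.White * (1f - progress));
spriteBatch.Draw(newTarget, Vector2.Zero, Color.White * progress);
spriteBatch.End();
```

Allocate the render targets once (not per frame), since RenderTarget2D creation is expensive; the per-frame cost is then just the two target switches.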