I'm trying to come up with a general concept for scene transitions that doesn't require any transition-specific code inside the scenes themselves. In all the samples I've seen so far, the scenes handle that themselves: for a fade in/out, for example, each scene has to adjust its draw method to apply the correct transparency. Not only would that be annoying to repeat in every scene, but with multiple kinds of transitions the code would get cluttered pretty fast.
So I've been thinking. The only approach I could come up with that supports this, and more complicated transitions, without handling anything in the scene itself, is to give each scene its own render target. The scene manager would set the scene's render target before calling its draw method. After the draw call, the render target would be reset to the manager's own target, and the scene's texture would be drawn using the parameters of the current transition. Without clearing the manager's target, further scenes would be composited the same way if more than one has to be drawn. This way you could do pretty much anything (transition-wise), the scenes would be completely independent of each other, and you wouldn't need a single line of transition code in the actual scenes. (For reference, there are various transition types I'd like to be able to do: not only fades, but also shader effects, moving one scene in while the current one is "pushed away", etc., involving one or multiple scenes.)
Well, that's my theory. Here's my question: does this sound like a viable plan? Is this the way to go? I've read about performance problems when switching render targets too frequently, among other issues, which is the main reason I'm hesitating to implement this. But so far I haven't been able to think of or find a better method. Although I think it shouldn't make a difference, for the moment I only care about 2D (mentioned just in case).
That is, in general, a sound approach, but beware of render target switching costs. Also, if each scene is animated during the transition, you'd be rendering two scenes at the same time into two different render targets and then compositing the results on screen.
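To make that concrete, here is a minimal sketch of the idea in XNA/MonoGame terms. The class names, the crossfade compositing, and the transitionProgress field are my own illustration of the approach above, not a definitive implementation:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Each scene draws into its own RenderTarget2D; the manager composites
// the targets onto the backbuffer using the current transition's parameters.
public abstract class Scene
{
    public RenderTarget2D Target;   // created and assigned by the manager
    public abstract void Draw(SpriteBatch batch, GameTime time);
}

public class SceneManager
{
    private readonly GraphicsDevice device;
    private readonly SpriteBatch batch;
    private Scene current, next;        // next is non-null only during a transition
    private float transitionProgress;   // 0 = only current visible, 1 = only next

    public SceneManager(GraphicsDevice device, SpriteBatch batch)
    {
        this.device = device;
        this.batch = batch;
    }

    public void Draw(GameTime time)
    {
        // Render each active scene into its own target; the scenes know
        // nothing about the transition.
        DrawSceneToTarget(current, time);
        if (next != null)
            DrawSceneToTarget(next, time);

        // Back to the backbuffer; composite with the transition parameters.
        device.SetRenderTarget(null);
        device.Clear(Color.Black);

        batch.Begin();
        batch.Draw(current.Target, Vector2.Zero, Color.White * (1f - transitionProgress));
        if (next != null)
            batch.Draw(next.Target, Vector2.Zero, Color.White * transitionProgress);
        batch.End();
    }

    private void DrawSceneToTarget(Scene scene, GameTime time)
    {
        device.SetRenderTarget(scene.Target);
        device.Clear(Color.Transparent);
        scene.Draw(batch, time);
    }
}
```

Other transitions would only change the compositing step: offset the destination positions for a "push", or run the scene textures through a custom Effect for shader-based transitions, while the scenes themselves stay untouched.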
[Using Unity 2020.3]
I'm trying to slowly blend different layers in and out in VR, with both layers being visible while the fade occurs. Right now, I am using two cameras: the main camera and a second one rendering to a render texture (each rendering only its respective layers). Then I use UI to fade the render texture in and out. This looks and works great in 2D view (including builds), but UI components do not render in VR.
I am aware that rendering this in VR will require 4 sets of rendering (two per eye), but I'd still like to know how to generate and display a render texture for each eye using Unity.
This effect can be done in other ways and I'm open to suggestions. There are a lot of different types of elements I wish to fade in and out; I'm aware of the solution of adding transparent shaders and fading particles, but that is tedious and requires a lot of setup (I'd like a more permanent solution that works for any project). That being said, I'd still like to know how to manipulate what is being rendered out to the VR headset.
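For reference, my current setup looks roughly like this (the script and field names are just illustrative; the RawImage sits on a screen-space canvas, which is exactly the part that does not show up in the headset):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.UI;

// Rough sketch of the setup described above: the second camera renders its
// layer into a RenderTexture, a RawImage displays that texture, and a
// CanvasGroup drives the fade. Works in 2D view, but the screen-space UI
// overlay is not rendered in the VR headset.
public class LayerFade : MonoBehaviour
{
    public Camera layerCamera;        // renders only the layer being faded
    public RawImage overlay;          // RawImage on a screen-space canvas
    public CanvasGroup overlayGroup;  // alpha of this group is animated
    public float fadeDuration = 1f;

    void Start()
    {
        var texture = new RenderTexture(Screen.width, Screen.height, 24);
        layerCamera.targetTexture = texture;
        overlay.texture = texture;
    }

    public IEnumerator FadeTo(float targetAlpha)
    {
        float start = overlayGroup.alpha;
        for (float t = 0f; t < fadeDuration; t += Time.deltaTime)
        {
            overlayGroup.alpha = Mathf.Lerp(start, targetAlpha, t / fadeDuration);
            yield return null;
        }
        overlayGroup.alpha = targetAlpha;
    }
}
```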
I'm fairly certain that the "Screen space effects" section of the Unity doc on Single Pass Stereo rendering (Double-Wide rendering) -- https://docs.unity3d.com/Manual/SinglePassStereoRendering.html -- is what I'm looking for; however, it still doesn't answer how to get the render texture for each eye (and I'm a little confused about how to use what they have written).
I'm happy to elaborate more and test some things out! Thank you in advance!
I am having an issue where I am supposed to load a large number of objects into my scene on an event,
so whenever I start loading these scenes my Vive/VR drops to the compositor screen and it kind of flickers between my main scene and the compositor screen
until the loading of the scenes is finished.
So my question is how to stop this flickering. I would be happy to show the compositor screen until my loading is completed and then switch back to my main scene, or anything else that would solve this flickering issue.
I have been searching for how to call the compositor screen on my own, or how to stop it from being called while my loading is in progress, but in vain.
Any help would be much appreciated because I am out of ideas.
Thanks...
Flickering and showing the compositor screen is generally caused by Unity not being able to render a frame in time. You should be able to find which object(s) (if any in particular) are causing the lag in the Profiler tab.
I assume that by "loading" objects you mean instantiating them. If so, it depends:
a) If you are instantiating multiple identical objects, look into GPU instancing methods.
b) If the objects are unique, you may try to:
Load the scene asynchronously (see the sketch after this list)
Place all objects (deactivated) in the first scene and, instead of instantiating them or loading a new scene, just activate them. This will only increase the initial loading time.
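A minimal sketch of the asynchronous option, assuming the heavy content lives in its own scene (the scene name and the additive mode here are placeholders, not something from the question):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Minimal sketch: load the heavy scene additively in the background so the
// main thread keeps rendering frames and SteamVR does not drop to the compositor.
public class AsyncSceneLoader : MonoBehaviour
{
    public string sceneToLoad = "HeavyScene"; // placeholder scene name

    public void StartLoading()
    {
        StartCoroutine(LoadRoutine());
    }

    private IEnumerator LoadRoutine()
    {
        AsyncOperation op = SceneManager.LoadSceneAsync(sceneToLoad, LoadSceneMode.Additive);
        while (!op.isDone)
        {
            // Optionally drive a progress indicator with op.progress here.
            yield return null;
        }
    }
}
```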
Other things that you can use are:
Single Pass Stereo Rendering
Use of only (or mostly) baked lighting, which will significantly increase performance (that let me display a VR scene in real time with a model of more than 40 million vertices).
By default, SteamVR fades to grid whenever an app hangs. While of course it's ideal to keep your framerate above the threshold that would normally trigger this, there are many cases (such as loading scenes) where all the optimization in the world won't keep Unity from hanging long enough to trigger this issue.
Fortunately, this behavior can be disabled entirely! SteamVR Settings -> Developer -> Do not fade to grid when app hangs. This setting can also be changed in the steamvr.vrsettings file, which can be found using vrpathreg.exe. There, set steamvr.doNotFadeToGrid to true. This file is modified by the SteamVR Settings interface as well, but accessing the file directly allows you to modify settings for your target devices from within a game installer, for example.
Unfortunately as of now there isn't a way to configure this on a per-app basis, meaning the behavior will be turned off for all SteamVR apps on a user's machine. Tweaking this setting comes with the caveat that low framerates make users feel sick and the fade-to-grid feature prevents this from happening. However, in my experience, the feature is a nuisance that detracts from immersion far more than a temporary dip in framerate as long as you're following performance best practices, and if framerate is dropping anyway, this fade to grid feature causes intense flickering that's just as uncomfortable as low framerate.
I am working on a procedurally generated art project. The idea is that at runtime several clay pots/vases are procedurally generated from a lathe/line and rotation script. The pots are located on a circular platform and the camera rotates around this platform.
Pots are generated just in front of the camera's field of vision and deleted after they pass out of it. At any given time, there are about 30 vessels.
The script is being changed so that the generation and deletion happen in a large batch while the camera movement is stopped. I would like to force Unity to re-bake GI at this time, but I can't figure out how.
Since none of the objects are moving and there will be a complete pause in all movement (the viewer will not know that the CPU is consumed), I'd like to avoid using realtime GI. Baked GI will allow me to use area lights and more expensive GI settings, but the objects generated at runtime need to be accounted for.
I hope this makes sense. I don't think I am using Unity in a typical fashion, so there is not a lot of documentation about this.
Also, if anyone knows a scripting API to access Lighting -> Object -> Important GI [x], that would be immensely helpful.
I was able to use GameObjectUtility to make all of the vessels Lightmap Static, but they also need Important GI (they generate on top of each other, inside each other, etc., and they have a slightly specular material).
The best solution seems to be a piecemeal procedurally generated scene. This is briefly mentioned in the introduction to Global Illumination.
You might look into 5.3's scene manager, which allows for making multiple scenes and piecing them together. If you baked in such a scene, I imagine you could combine them as you wished.
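If the bake can happen in the Editor rather than in a built player (runtime re-baking isn't available in standalone builds, as far as I know), a rough sketch of marking the generated vessels static and kicking off a bake could look like this. The "Vessel" tag is a placeholder for however you track the generated objects, and this doesn't touch the Important GI checkbox itself:

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;

// Editor-only sketch: mark the generated objects as lightmap static and re-bake GI.
// Lightmapping.BakeAsync is a UnityEditor API, so this only works inside
// the Editor, not in a standalone build.
public static class RebakeGeneratedVessels
{
    [MenuItem("Tools/Rebake Generated Vessels")]
    public static void Rebake()
    {
        // "Vessel" tag is hypothetical; substitute your own way of finding the pots.
        foreach (GameObject vessel in GameObject.FindGameObjectsWithTag("Vessel"))
        {
            GameObjectUtility.SetStaticEditorFlags(vessel, StaticEditorFlags.LightmapStatic);
        }

        // Start an asynchronous bake of the open scene's GI.
        Lightmapping.BakeAsync();
    }
}
#endif
```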
I'm new to Unity and I'm making a car racing game. Now I'm stuck at a certain point. I've been looking for a solution to my problem, but without success.
My problem is:
When I run my game on my phone, it stutters badly: whenever there are several buildings in front of the car camera, like one building behind another, it lags. The reason is that there are so many vertices and edges at that moment that the car camera cannot render all of it at the same time.
How do I preload the 2nd scene while loading the 1st scene?
I am using the free version of Unity.
In graphics programming, there is a common technique of simply not drawing objects that aren't in the field of view. I'm sure Unity can handle this. Check the link: Unity's description of this topic.
I'm not hugely knowledgeable about Unity, but as a 3D modeller there's a bunch of things you can do to improve performance:
Create a simplified version of your buildings with fewer polygons for use when buildings are a long way away. A skyscraper, for example, can be as simple as a textured box.
If you've done that already, reduce the distance at which the simpler imposters are substituted for the complex versions.
Reduce the number of polygons by other means. A good example is if you've got a window ledge sticking out of the side of a building, don't try and make it an extension of the body. Instead, make it a separate box, delete the facet that won't be seen, and move it to intersect with the rest of the building.
Another good trick is to use bump maps or normal maps to approximate smaller features, rather than trying to model everything.
Opaqueness. Try not to have transparent windows in your buildings. It's computationally cheaper to make them just reflect the skybox or a suitably blurred reflection imposter. Also make sure that the material's shader is in Opaque mode, if it supports this.
You might also benefit a little from checking the 'Static' box on the game object, assuming that buildings aren't able to be moved (e.g. by smashing through them in a bulldozer).
Collision detection can also be a big drain. Be sure to use the simplest possible detection mesh you can - either a box, cylinder, sphere or a combination.
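On the separate question of preloading the 2nd scene: in recent Unity versions (older free versions limited asynchronous loading to Pro), a rough sketch using asynchronous loading with delayed activation might look like this; the scene name is a placeholder:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: start loading the next scene in the background, but hold off
// activation until you actually want to switch to it.
public class ScenePreloader : MonoBehaviour
{
    private AsyncOperation preload;

    public void BeginPreload(string sceneName)   // e.g. "Level2" (placeholder)
    {
        StartCoroutine(Preload(sceneName));
    }

    private IEnumerator Preload(string sceneName)
    {
        preload = SceneManager.LoadSceneAsync(sceneName);
        preload.allowSceneActivation = false;    // keep it loaded but not active
        // Progress stalls at ~0.9 until activation is allowed.
        while (preload.progress < 0.9f)
            yield return null;
    }

    public void ActivatePreloadedScene()
    {
        if (preload != null)
            preload.allowSceneActivation = true;
    }
}
```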
In the game I'm trying to create, players are going to be able to go in all directions.
I added one single image (a 1024x768 2D texture) as the background, or terrain.
Now, when the player moves around, I want to display some stuff.
For example, let's say a lamp: when the player moves far enough, he will see the lamp. If he goes back, the lamp will disappear because it won't be on screen anymore.
If I'm unclear, think about Mario: when you go further, coin boxes appear; if you go back, they disappear, but the background always stays the same.
I thought I could spawn ALL my sprites at once, even at positions like (1599, 1422) where they are invisible because the screen is only 1024x768, and when the player moves, I would set the sprite's position to (1599-1, 1422-1) and so on. Is this a good way to do it?
Are there better ways?
There are two ways you can achieve this result.
Keep player and camera stationary, move everything else.
Keep everything stationary except the player and the camera.
It sounds like you are trying to implement the first option. This is a fine solution, but it can become complicated quickly as the number of items grows. If you use a tile system, this can become much easier to manage. I recommend you look into using a tile engine of some sort. There are a lot of great tile map editors as well.
Some resources for using Tiles:
Tiled -- Nice Map Editor
TiledLib -- XNA Library for using Tiled Maps
What you're describing there is a viewport: a portion of the 'world' that is currently visible.
You need to define the contents of your 'world' somehow. This can be done with a data structure such as a scene graph, but for the simple 2D environment you're describing, you could probably store objects in an array. You would need to bind your direction keys to change the coordinates of the viewport (and your character if you want them to stay centered).
It's a good idea to only draw objects that are currently visible. Without knowing which languages or packages you are using, it's difficult to comment further on that.
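As a rough illustration in XNA/MonoGame terms (since the other answers mention XNA), assuming objects are stored with world coordinates; all names here are placeholders rather than a prescribed design:

```csharp
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Objects live at world coordinates; the viewport offset is subtracted at
// draw time, and anything outside the visible rectangle is skipped.
public class WorldObject
{
    public Texture2D Sprite;
    public Vector2 WorldPosition;
}

public class Viewport2D
{
    public Vector2 Offset;                                // top-left corner of the view in world space
    public readonly Point Size = new Point(1024, 768);    // matches the 1024x768 screen in the question

    public void Move(Vector2 delta)
    {
        Offset += delta;   // bind this to the direction keys
    }

    public void Draw(SpriteBatch batch, List<WorldObject> objects)
    {
        var visible = new Rectangle((int)Offset.X, (int)Offset.Y, Size.X, Size.Y);

        foreach (var obj in objects)
        {
            var bounds = new Rectangle(
                (int)obj.WorldPosition.X, (int)obj.WorldPosition.Y,
                obj.Sprite.Width, obj.Sprite.Height);

            // Only draw what intersects the viewport (simple culling).
            if (visible.Intersects(bounds))
                batch.Draw(obj.Sprite, obj.WorldPosition - Offset, Color.White);
        }
    }
}
```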
I would look into Parallax scrolling. Here is an example of it in action.
If this is what you require, then here is a tutorial with source code.
XNA Parallax Scrolling
After you are finished with basic scrolling, try to implement some frustum culling: that is, only draw objects which are actually visible on the screen, and avoid unnecessarily drawing stuff that cannot be seen anyway.
I would prefer solution number 2 (move the player and camera). It would be easier for me, but maybe it's just personal preference.