I'm new to Unity and I'm making a car racing game. I'm stuck at one point; I've been looking for a solution to my problem but haven't succeeded.
My problem is:
When I run my game on my phone, it stutters badly whenever there are several buildings in front of the car's camera, like one building behind another. The reason is that there are so many vertices and edges at that moment that the camera can't render all that geometry at the same time.
How do I preload the 2nd scene while the 1st scene is loading?
I am using Unity free version.
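(On the preloading part of the question: here is a minimal sketch, assuming a Unity version that has SceneManager; older free-version-era Unity used Application.LoadLevelAsync instead, and the scene name "Scene2" is a placeholder.)

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

public class ScenePreloader : MonoBehaviour
{
    AsyncOperation preload;

    void Start()
    {
        // Load the next scene in the background while this one runs...
        preload = SceneManager.LoadSceneAsync("Scene2");
        // ...but hold activation (progress stops at ~0.9) until we say so.
        preload.allowSceneActivation = false;
    }

    public void ActivateNextScene()
    {
        preload.allowSceneActivation = true;
    }
}
```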
In graphics programming there is a common technique of simply not drawing objects that aren't in the field of view. I'm sure Unity can handle this; see Unity's documentation on occlusion culling.
I'm not hugely knowledgeable about Unity, but as a 3D modeller there's a bunch of things you can do to improve performance:
Create a simplified version of your buildings with fewer polygons for use when buildings are a long way away. A skyscraper, for example, can be as simple as a textured box.
If you've done that already, reduce the distance at which the simpler imposters are substituted for the complex versions (see the LODGroup sketch at the end of this list).
Reduce the number of polygons by other means. A good example: if you've got a window ledge sticking out of the side of a building, don't try to make it an extension of the body. Instead, make it a separate box, delete the facet that won't be seen, and move it to intersect with the rest of the building.
Another good trick is to use bump maps or normal maps to approximate smaller features, rather than trying to model everything.
Opaqueness. Try not to have transparent windows in your buildings. It's computationally cheaper to make them just reflect the skybox or a suitably blurred reflection imposter. Also make sure that the material's shader is in Opaque mode, if it supports this.
You might also benefit a little from checking the 'Static' box on the game object, assuming that buildings aren't able to be moved (e.g. by smashing through them with a bulldozer).
Collision detection can also be a big drain. Be sure to use the simplest collision shape you can: a box, cylinder, sphere, or a combination of these.
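As a concrete starting point for the first two tips, here is a minimal sketch of configuring Unity's LODGroup from script (the highPoly/lowPoly fields and the transition percentages are placeholder assumptions to tune for your scene):

```csharp
using UnityEngine;

public class BuildingLODSetup : MonoBehaviour
{
    public Renderer highPoly; // detailed building mesh
    public Renderer lowPoly;  // textured-box imposter

    void Start()
    {
        LODGroup group = gameObject.AddComponent<LODGroup>();
        LOD[] lods = new LOD[2];
        // Use the detailed mesh while it covers more than ~40% of screen height...
        lods[0] = new LOD(0.4f, new Renderer[] { highPoly });
        // ...then the imposter down to ~5%, after which the building is culled.
        lods[1] = new LOD(0.05f, new Renderer[] { lowPoly });
        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}
```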
I want to simulate the trajectories of planets in a separate scene to find out where they will be in the future. I drew a quick diagram to demonstrate.
Is there a way to simulate two scenes separately, hiding one but showing the other? I tried this, which says they don't interact with each other, but when I tried it they still collided.
Yes, there is a way: put all the "hidden" objects, like the planets, on a separate layer in Unity, and disable collisions between that layer and all other layers in Edit > Project Settings > Physics. This way those objects won't have any effect on the rest of your scene.
To visually hide the objects, simply disable the rendering of that layer in your scene camera. And that's it; I hope this was helpful.
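For illustration, a minimal sketch of both steps done from script (assuming you've added a layer named "Simulation" under Tags & Layers; the collision matrix can of course be set once in the Physics settings instead):

```csharp
using UnityEngine;

public class HiddenSimulationSetup : MonoBehaviour
{
    void Start()
    {
        int simLayer = LayerMask.NameToLayer("Simulation");

        // Disable collisions between the simulation layer and every other
        // layer (the script equivalent of unticking the rows in
        // Edit > Project Settings > Physics). Simulated objects can still
        // collide with each other, since simLayer vs simLayer is skipped.
        for (int layer = 0; layer < 32; layer++)
        {
            if (layer != simLayer)
                Physics.IgnoreLayerCollision(simLayer, layer, true);
        }

        // Visually hide the layer by clearing its bit in the camera's culling mask.
        Camera.main.cullingMask &= ~(1 << simLayer);
    }
}
```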
I am working on a procedurally generated art project. The idea is that at runtime several clay pots/vases are procedurally generated from a lathe/line-and-rotation script. The pots are located on a circular platform and the camera rotates around this platform.
Pots are generated just in front of the camera's field of view and deleted after they pass out of it. At any given time there are about 30 vessels.
The script is being changed so that the generation and deletion happens in a large batch while the camera movement is stopped. I would like to force Unity to re-bake GI at this time, but I can't figure out how.
Since none of the objects are moving and there will be a complete pause in all movement (the viewer will not notice the CPU being consumed), I'd like to avoid using realtime GI. Baked GI will allow me to use area lights and more expensive GI settings, but the objects generated at runtime need to be accounted for.
I hope this makes sense. I don't think I am using Unity in a typical fashion, so there is not a lot of documentation about this.
Also, if anyone knows a scripting API to access Lighting -> Object -> Important GI [x], that would be immensely helpful.
I was able to use GameObjectUtility to make all of the vessels Lightmap Static, but they need to be GI Important (they generate on top of each other, inside each other, etc., and they have a slightly specular material).
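(For reference, a minimal editor-scripting sketch of the flags-plus-rebake part. These are UnityEditor APIs, so they work in the Editor only, not in a built player; I'm not aware of a documented scripting handle for the Important GI checkbox itself.)

```csharp
using UnityEditor;
using UnityEngine;

public static class VesselBaker
{
    // Flag the generated vessels as lightmap-static, then kick off a bake.
    public static void FlagAndBake(GameObject[] vessels)
    {
        foreach (GameObject vessel in vessels)
        {
            GameObjectUtility.SetStaticEditorFlags(
                vessel, StaticEditorFlags.LightmapStatic);
        }
        Lightmapping.BakeAsync(); // non-blocking; Lightmapping.Bake() blocks
    }
}
```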
The best solution seems to be a piecemeal, procedurally generated scene. This is briefly mentioned in the introduction to Global Illumination.
You might look into 5.3's scene manager, which allows for making multiple scenes and piecing them together. If you baked in such a scene, I imagine you could combine them together as you wished.
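A minimal sketch of that idea, assuming each pre-baked chunk lives in its own scene ("PotRing_Baked" is a placeholder scene name):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

public class ChunkLoader : MonoBehaviour
{
    void Start()
    {
        // Load a pre-baked chunk additively alongside the running scene.
        SceneManager.LoadScene("PotRing_Baked", LoadSceneMode.Additive);
    }
}
```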
Here's the deal - I'm working on an algorithm/library that is able to generate a navigation mesh in virtually any environment where I can get coordinates for the controlled agent and/or for other agents within the same static environment. The only input I have, is a collection of points where an agent has been to.
(See the image here to hopefully understand what I mean)
I already got to the point where I can create navmeshes manually and navigate on them well enough. However, in larger environments, having only the coordinates of, say, the controlled agent, it's really tedious and time-consuming to do this manually.
The uses of such an algorithm/library are obvious to me, but I have put a lot of thought into this already, so I'll list a couple of things I'd like to accomplish:
Robotics (scans environment, only gets distance from self to a point, hence getting coordinates - no need for complicated image/video processing)
AI that is able to navigate an unknown and unseen maze (any shape or size) by exploring it
Recording walked areas and creating AI for games that don't know certain places unless they've been there
Now you hopefully see what kind of solutions I'm looking for.
I have tried a couple of things, but couldn't figure them out. One of the most successful things I've tried is giving a range to each individual point (creating a circle), and then looking for places with overlapping circles - you can most likely move on those areas. The problems with this approach started with triangulation of the areas. The resulting mesh may be a little inaccurate, but it must be able to connect to existing ("discovered") parts of the mesh seamlessly (not everything has to be interconnected somehow, as agents can disappear and reappear, but within reasonable proximity, connect the mesh).
Some more information: I'm working in C#, though solutions in Java, C/C++, Objective-C, pseudocode, etc. are equally acceptable.
P.S. I'm not interested at all in answers like "just use this library" or "use this other language/environment" etc... I want an algorithm. Thank you in advance.
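(To make the overlapping-circles attempt described above concrete, here is a minimal sketch of that step: every visited point becomes a disc of radius r, and two points are considered mutually walkable when their discs overlap. This produces a walkability graph to run A*/Dijkstra on, not yet a triangulated mesh.)

```csharp
using System.Collections.Generic;
using System.Numerics;

public static class WalkGraphBuilder
{
    // Builds an adjacency list: index i maps to all points whose discs
    // overlap point i's disc (centers closer than 2r).
    public static Dictionary<int, List<int>> Build(IList<Vector2> points, float r)
    {
        float maxDistSq = (2 * r) * (2 * r);
        var graph = new Dictionary<int, List<int>>();
        for (int i = 0; i < points.Count; i++)
            graph[i] = new List<int>();

        // O(n^2) for clarity; a spatial grid or k-d tree makes this much faster.
        for (int i = 0; i < points.Count; i++)
            for (int j = i + 1; j < points.Count; j++)
                if (Vector2.DistanceSquared(points[i], points[j]) <= maxDistSq)
                {
                    graph[i].Add(j);
                    graph[j].Add(i);
                }

        return graph;
    }
}
```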
I can help with 2D pathfinding. You need to find the red outline first. Then you can build a Voronoi diagram using the red outline (not the agents' points), remove all edges outside the outline, and the remaining edges can be used by an agent to navigate the shape. Read about it here: http://www.cs.columbia.edu/~pblaer/projects/path_planner/
I'm trying to come up with a general concept for transitions, without having to include any specific code inside the actual scene. In all the samples I've seen so far, the scenes handle that stuff themselves, like a fade in/out where the scenes have to adjust their draw method for the correct transparency. Not only would this be annoying to have in every scene; with multiple kinds of transitions you'd get cluttered code pretty fast.
So I've been thinking. The only way I could come up with to support this, and more complicated transitions, without handling it in the scene itself, is render targets for each scene. The render target would be set by the scene manager before calling the draw method. After draw, the render target would be reset to the manager's render target, and the scene's texture would be drawn with information about the current transition. Without clearing the manager's texture, more scenes would follow if more have to be drawn. This way you could do pretty much anything (transition-wise), the scenes would be completely independent of each other, and you wouldn't need a single line of transition code in the actual scenes. (For reference, there are various transition types I'd like to be able to do: not only fades, but also shader effects, moving one scene in while the current one is "pushed away", etc., involving one or multiple scenes.)
Well, that's my theory. Here comes my question: Does this sound like a viable plan? Is this the way to go? I've read about performance problems when switching rendering targets too frequently, and other problems, which is the main reason I'm hesitating to implement this. But so far I couldn't think of or find a better method. Although I think it shouldn't make a difference, for the moment I only care about 2D (mentioned just in case).
That, in general, is a sound approach, but beware of render-target switching costs. Also, if each scene were to be animated at the time of transition, you'd be rendering two scenes at the same time into two different render targets and then compositing the results on screen.
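A minimal XNA/MonoGame-style sketch of that compositing step (IScene, the scene fields, and the Progress value are placeholder assumptions; the cross-fade stands in for any transition effect):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public interface IScene { void Draw(GameTime gameTime); }

public class TransitionManager
{
    readonly GraphicsDevice device;
    readonly SpriteBatch spriteBatch;
    readonly RenderTarget2D oldTarget, newTarget;
    public IScene OldScene, NewScene;
    public float Progress; // 0 = old scene fully visible, 1 = new scene

    public TransitionManager(GraphicsDevice device, int width, int height)
    {
        this.device = device;
        spriteBatch = new SpriteBatch(device);
        oldTarget = new RenderTarget2D(device, width, height);
        newTarget = new RenderTarget2D(device, width, height);
    }

    public void Draw(GameTime gameTime)
    {
        // Each scene renders into its own target, oblivious to the transition.
        device.SetRenderTarget(oldTarget);
        device.Clear(Color.Black);
        OldScene.Draw(gameTime);

        device.SetRenderTarget(newTarget);
        device.Clear(Color.Black);
        NewScene.Draw(gameTime);

        // Back to the backbuffer; composite both textures with a cross-fade.
        device.SetRenderTarget(null);
        spriteBatch.Begin();
        spriteBatch.Draw(oldTarget, Vector2.Zero, Color.White * (1f - Progress));
        spriteBatch.Draw(newTarget, Vector2.Zero, Color.White * Progress);
        spriteBatch.End();
    }
}
```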
In this game I'm trying to create, players are going to be able to go in all directions.
I added a single image (a 1024x768 2D texture) as the background, or terrain.
Now, when the player moves around, I want to display some stuff.
For example, let's say a lamp: when the player moves far enough, he will see the lamp. If he goes back, the lamp will disappear because it won't be on screen anymore.
If I'm unclear, think of Mario: when you go further, coin boxes appear; if you go back, they disappear, but the background always stays the same.
I thought I could spawn ALL my sprites at once, but a sprite at a position like (1599, 1422) would be invisible because the screen is only 1024x768, and when the player moves, I would set that sprite's position to (1599-1, 1422-1) and so on. Is this a good way to do it?
Are there better ways?
There are two ways you can achieve this result:
1. Keep the player and camera stationary, and move everything else.
2. Keep everything stationary except the player and the camera.
It sounds like you are trying to implement the first option. This is a fine solution, but it can become complicated quickly as the number of items grows. If you use a tile system, this becomes much easier to manage (see the sketch after the links below). I recommend you look into using a tile engine of some sort. There are a lot of great tile map editors as well.
Some resources for using Tiles:
Tiled -- Nice Map Editor
TiledLib -- XNA Library for using Tiled Maps
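As promised, a minimal sketch of tile-based drawing (assuming square tiles, an XNA-style SpriteBatch with Begin/End handled by the caller, and a simple 2D index array; only tiles overlapping the screen are drawn):

```csharp
using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class TileMap
{
    public int[,] Tiles;        // tile indices into the tileset, [row, col]
    public Texture2D[] Tileset; // one texture per tile index
    public int TileSize = 32;

    public void Draw(SpriteBatch spriteBatch, Vector2 camera, int screenW, int screenH)
    {
        // Visible tile range, clamped to the map bounds.
        int firstCol = Math.Max(0, (int)(camera.X / TileSize));
        int firstRow = Math.Max(0, (int)(camera.Y / TileSize));
        int lastCol = Math.Min(Tiles.GetLength(1) - 1, (int)((camera.X + screenW) / TileSize));
        int lastRow = Math.Min(Tiles.GetLength(0) - 1, (int)((camera.Y + screenH) / TileSize));

        for (int row = firstRow; row <= lastRow; row++)
            for (int col = firstCol; col <= lastCol; col++)
            {
                // World position of the tile, shifted by the camera offset.
                Vector2 world = new Vector2(col * TileSize, row * TileSize);
                spriteBatch.Draw(Tileset[Tiles[row, col]], world - camera, Color.White);
            }
    }
}
```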
What you're describing there is a Viewport, which describes a portion of the 'world' that is currently visible.
You need to define the contents of your 'world' somehow. This can be done with a data structure such as a scene graph, but for the simple 2D environment you're describing, you could probably store objects in an array. You would need to bind your direction keys to change the coordinates of the viewport (and your character if you want them to stay centered).
It's a good idea to only draw objects that are currently visible. Without knowing which languages or packages you are using, it's difficult to comment on that further.
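In case it helps, a minimal sketch of the viewport idea (assuming XNA/MonoGame, since other answers here reference it; WorldObject is a placeholder type):

```csharp
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class WorldObject
{
    public Vector2 Position;  // fixed world coordinates
    public Texture2D Sprite;
}

public class WorldRenderer
{
    public Vector2 CameraPosition; // top-left corner of the visible window
    public List<WorldObject> Objects = new List<WorldObject>();

    public void Draw(SpriteBatch spriteBatch, int screenWidth, int screenHeight)
    {
        Rectangle view = new Rectangle(
            (int)CameraPosition.X, (int)CameraPosition.Y, screenWidth, screenHeight);

        spriteBatch.Begin();
        foreach (WorldObject obj in Objects)
        {
            Rectangle bounds = new Rectangle(
                (int)obj.Position.X, (int)obj.Position.Y,
                obj.Sprite.Width, obj.Sprite.Height);

            // Skip anything outside the viewport (simple culling).
            if (!view.Intersects(bounds))
                continue;

            // Convert world coordinates to screen coordinates.
            spriteBatch.Draw(obj.Sprite, obj.Position - CameraPosition, Color.White);
        }
        spriteBatch.End();
    }
}
```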
I would look into Parallax scrolling. Here is an example of it in action.
If this is what you require, then here is a tutorial with source code.
XNA Parallax Scrolling
After you are finished with basic scrolling, try to implement some frustum culling: that is, only draw objects that are actually visible on the screen, and avoid unnecessarily drawing stuff that cannot be seen anyway.
I would prefer solution number 2 (move the player and camera). It would be easier for me, but maybe it's just personal preference.