Rendering a screen within a game world in Unity - C#

I have a 3D world the player can interact with, and I want the player to be able to pick up a gameboy and play a game on it.
I'm wondering how I can render another 'game' on the gameboy that the player can play while still being in the world, meaning it's a game within a game. I tried doing this with cameras, but I'm stuck, as I don't seem to understand the core concept behind rendering a game there. I know I could render a canvas using UGUI, but I don't think I can make a physics-based game on UGUI?
Would love some help!

You'll probably need a Render Texture, and the gameboy's game should live in the same scene as your main game; it just plays out in another location in the world, filmed by a second camera.
Try to implement it, and come back if you can't figure out a specific part.
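
A minimal sketch of that setup, assuming you have created a Render Texture asset, a second camera pointed at the inner game's corner of the scene, and a quad on the gameboy model for its screen (all names here are placeholders):

    using UnityEngine;

    // Attach to the camera that films the "gameboy game" area of the scene.
    // The Render Texture asset and the screen's Renderer are assigned in
    // the Inspector; a low resolution like 160x144 gives a handheld feel.
    public class GameboyScreen : MonoBehaviour
    {
        [SerializeField] private RenderTexture screenTexture;
        [SerializeField] private Renderer screenRenderer; // quad on the gameboy model

        private void Start()
        {
            // Route this camera's output into the texture instead of the display.
            GetComponent<Camera>().targetTexture = screenTexture;

            // Show that texture on the handheld's screen material.
            screenRenderer.material.mainTexture = screenTexture;
        }
    }

Because the inner game lives in the same scene (far away from the playable area, or on its own layer with the second camera's culling mask restricted to it), its physics and scripts run normally; the Render Texture is just a window onto it.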

Related

Unity 2D: How to put entire game in the safe area?

I'm making a Unity 2D game and I'm looking for help on how to make sure the entire game only displays in the safe area on both iOS and Android.
There's no shortage of tutorials on how to put Canvas elements inside the safe area, but I'd like my entire game to fit in there. Is there an automatic way to do this in Unity? Or perhaps a way to set the camera bounds to the safe area? What's the best way? Thanks.
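
For what it's worth, the "camera bounds" idea from the question can be sketched: Screen.safeArea is reported in pixels while a camera's viewport rect is normalized, so one possible (illustrative, not battle-tested) approach is to shrink the viewport to the safe area:

    using UnityEngine;

    // One possible approach: fit the camera's viewport to the safe area.
    // Screen.safeArea is in pixels; Camera.rect is normalized (0..1).
    [RequireComponent(typeof(Camera))]
    public class SafeAreaCamera : MonoBehaviour
    {
        private void Start()
        {
            Rect safe = Screen.safeArea;
            GetComponent<Camera>().rect = new Rect(
                safe.x / Screen.width,
                safe.y / Screen.height,
                safe.width / Screen.width,
                safe.height / Screen.height);
        }
    }

Note that the strip outside the viewport is not cleared by this camera, so you may want a second, lower-depth camera (or a solid background) to fill the letterboxed region.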

Bloom looks different in the Scene view vs the Game view in Unity

So basically I'm using the lighting system for the 2D render pipeline in Unity, and for whatever reason the post-processing looks different in each view.
The Scene view has more glow than the Game view, and I would very much like the Game view to look like the Scene view. Does anyone know how to do that?
Edit: I have figured out that my Game view is not using scene lighting; when I turn off scene lighting in the Scene view, both cameras look identical.
Edit: So apparently my Game view is using the lighting, just not the bloom effect.
I ended up figuring out that if you go to the Render Pipeline asset and, under Quality, check the HDR checkbox, it does what I was looking for.
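In the editor you would just tick that box on the asset, but for completeness, the same flag is exposed to scripts; a small illustrative sketch, assuming the project uses URP:

    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.Universal;

    // Illustrative only: flips the same HDR flag that the URP asset's
    // Quality section exposes in the Inspector.
    public static class HdrToggle
    {
        public static void EnableHdr()
        {
            if (GraphicsSettings.currentRenderPipeline is UniversalRenderPipelineAsset urp)
                urp.supportsHDR = true;
        }
    }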

Unity - build camera is different from the editor's "Game" camera

I've just started developing for the Windows Mixed Reality headset in Unity, and it seemed to be going well until I built the program.
The point of my game is simple: one player navigates through a maze in VR while another watches the monitor and guides them through.
In the Unity editor, under the "Game" tab, the cameras work as expected. I used Render Textures to display two cameras (one for the VR view and one for an overview of the entire maze) on a canvas, which is what the Game view shows.
However, when I build my game, the only thing that appears on the monitor is the VR perspective.
I have set the Target Eye for the VR camera to "Both" and the main camera's to "None (Main Display)" as others have suggested, but no luck.
Is this a small error I've overlooked, or is there a larger problem?
Alex.

How can I make an overlay GUI for an Android Unity game

I have made an Android game using Unity, and now, for the finishing touch, I need a nice GUI. But unlike a GUI that lives in a separate Unity scene, I would like it to be an overlay in the scene where the game is played: you can see the game in the background, but there is a GUI on top with some buttons like Settings and Play. Say you click Play; then the GUI goes away and the game begins. Is this something that can be done in Unity? Any help would be greatly appreciated, thanks!
Bonus question: what's the best way to make a good-looking GUI? Should I design it myself or buy one from the Unity Asset Store? Pros and cons of each?
You just need to create the UI in the same scene as your game using the new Unity UI system and a Canvas object. Then, for example, on the Play button's click you animate the UI out of the visible area or remove it from the canvas.
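
A minimal sketch of that idea (names are placeholders): the menu lives on a Canvas panel overlaying the running scene and is simply hidden when Play is pressed.

    using UnityEngine;

    // Attach anywhere in the gameplay scene. Assign the menu panel
    // (a Canvas child holding the buttons) in the Inspector, then wire
    // OnPlayClicked to the Play button's OnClick event.
    public class OverlayMenu : MonoBehaviour
    {
        [SerializeField] private GameObject menuPanel;

        public void OnPlayClicked()
        {
            // Hide the overlay; the game behind it was running all along.
            // Alternatively, trigger a slide-out animation here instead.
            menuPanel.SetActive(false);
        }
    }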

Is there a way to make a 3D game in WinForms or WPF in C#?

I'm thinking about making a 3D point-and-click game. Is it possible to make one in WinForms or WPF? I don't need physics or anything; all I need is for the application to render 3D objects. I know that I could use XNA, but if I do, I will have to relearn almost everything. My third approach would be to build the scenes in a 3D game engine, capture the screen, and load the result as an image. Any suggestions would be appreciated.
There's a big difference between a 3D game and just letting players interact with a rendered image.
Your approach of loading a pre-rendered image is possible in both WinForms and WPF. You would just need to capture click events on the image and check the location against your list of active areas, then handle whatever needs to happen, e.g. move to the next area, activate an item, etc.
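A rough WinForms sketch of that hotspot idea (the file name, region names, and coordinates are all made up):

    using System;
    using System.Collections.Generic;
    using System.Drawing;
    using System.Windows.Forms;

    // Shows a pre-rendered scene in a PictureBox and hit-tests clicks
    // against a list of active rectangles.
    public class SceneForm : Form
    {
        private readonly PictureBox view = new PictureBox { Dock = DockStyle.Fill };
        private readonly Dictionary<string, Rectangle> hotspots = new Dictionary<string, Rectangle>
        {
            { "door",  new Rectangle(400, 120, 80, 200) },
            { "lever", new Rectangle(150, 260, 40, 60) },
        };

        public SceneForm()
        {
            view.Image = Image.FromFile("scene01.png"); // pre-rendered elsewhere
            view.MouseClick += OnSceneClicked;
            Controls.Add(view);
        }

        private void OnSceneClicked(object sender, MouseEventArgs e)
        {
            foreach (var spot in hotspots)
            {
                if (spot.Value.Contains(e.Location))
                {
                    // e.g. swap in the next scene image, activate the item, etc.
                    Console.WriteLine("Clicked " + spot.Key);
                }
            }
        }
    }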
Edit from comment:
It's not so much about which is friendlier; you can host an XNA viewport in either WinForms or WPF. It's more about how you want your game to work. If you never have moving 3D scenes, XNA is overkill and images will work just fine.
If you want dynamic scenes, you'll need to render them on the fly, and then XNA makes more sense. It is a lot more work, though, compared to just displaying images.
If you just want to show pre-rendered 3D images in your game, why not create them using a real 3D graphics tool, such as 3D Studio Max or Maya (or a free one, such as Blender)? It sounds like there's no need to render the 3D scenes in a game engine at all.
