XNA 4.0 Security Camera Feature - C#

I'm looking to make a security-camera type feature in a game I want to design. The idea is that there will be a designated rectangle, similar to a TV screen, in the game, and I want that rectangle to display what a camera sees in a specific room.
So to set up a specific scenario, let's say we have Room A and Room B. I want Room B to contain a TV screen that displays what is currently visible in Room A. I know this must be possible somehow using the XNA camera functions; I'm just really unsure how I would capture what the camera sees in that area and then show it in the designated sprite rectangle in Room B.
Hopefully this makes sense or is possible :D
Thanks,
Shane.

You will want to render your security camera scene to a custom RenderTarget2D, which you can then use as though it were a Texture2D.
The 5 basic steps to this are:
1. Create a custom RenderTarget2D.
2. Tell your GraphicsDevice to render to this new target.
3. Render your 'screen' scene.
4. Reset the render target.
5. Texture your screen polygon with the texture created by the render target.
For more information, see Riemer's XNA tutorial.
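The steps above can be sketched roughly as follows in an XNA 4.0 Game class. This is a minimal sketch, not a complete implementation: `DrawRoomA`, `spriteBatch`, `screenRectangle`, and the 256x192 target size are placeholders for your own code.

```csharp
// Assumed fields: spriteBatch (SpriteBatch), screenRectangle (Rectangle
// covering the TV screen), DrawRoomA() (your Room A scene drawing code).
RenderTarget2D cameraTarget;

protected override void LoadContent()
{
    // 1. Create a custom RenderTarget2D sized to the TV screen.
    cameraTarget = new RenderTarget2D(GraphicsDevice, 256, 192);
}

protected override void Draw(GameTime gameTime)
{
    // 2. Tell the GraphicsDevice to render to the new target.
    GraphicsDevice.SetRenderTarget(cameraTarget);
    GraphicsDevice.Clear(Color.Black);

    // 3. Render the 'screen' scene (Room A, from the security camera's view).
    DrawRoomA();

    // 4. Reset the render target back to the back buffer.
    GraphicsDevice.SetRenderTarget(null);

    // 5. Use the render target as a Texture2D for the TV rectangle in Room B.
    GraphicsDevice.Clear(Color.CornflowerBlue);
    spriteBatch.Begin();
    spriteBatch.Draw(cameraTarget, screenRectangle, Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}
```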

Related

How to set up render textures in VR for Unity

I am trying to build a VR application with portals into other worlds. These portals do not have to be flat, however; they can be bent. Here https://www.youtube.com/watch?v=EpoKwEMtEWc is a video of an early version.
Right now, the portals are 2D planes with camera textures that stretch and move according to the user's position. For VR use, and to ease performance issues, I would like to define which area of the screen gets rendered by which camera. Sort of like a viewport in regular Unity, but not rectangular, and for VR with Windows Mixed Reality.
Is this somehow possible? Possibly even without Unity Pro?

How to present a second screen of the same scene in C#

I have a Windows application that uses DirectX and opens a Windows.Form to show VRML objects in 3D space.
I want to open a second screen that will show the exact same scene, but from a different angle.
(Like Rear-view mirror in car racing games)
I've looked into MDI, but I just can't get my head around it.
Any instructions would be appreciated.
Thanks!

Rendering a screen within a game world in Unity

I have a 3D world the player can interact with, and I want the player to be able to pick up a gameboy and play a game on it.
I'm wondering how I can render another 'game' on the gameboy that the player can play while still being in the world, meaning it's a game within a game. I tried doing this with cameras, but I'm stuck, as I don't seem to understand the core concept behind rendering a game there. I know I could render a canvas using UGUI, but I don't think I can make a physics-based game on UGUI?
Would love some help!
You'll probably need a Render Texture. The gameboy's game should live in the same scene as your main game; it just has to show another location in the world.
Give it a try, and come back if you get stuck on a specific part.
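As a starting point, here is a minimal sketch of the Render Texture approach in Unity C#. The component names, the 160x144 resolution, and the idea of a second camera pointed at a mini-game area elsewhere in the scene are all assumptions for illustration.

```csharp
using UnityEngine;

// Routes a second camera's output into a RenderTexture and displays
// that texture on the gameboy's screen mesh. Assign both fields in
// the Inspector; names here are placeholders.
public class GameboyScreen : MonoBehaviour
{
    public Camera gameboyCamera;    // camera looking at the mini-game area in the world
    public Renderer screenRenderer; // the mesh renderer of the gameboy's screen

    void Start()
    {
        // Create a small render texture (width, height, depth buffer bits)
        // and make the mini-game camera render into it instead of the screen.
        var rt = new RenderTexture(160, 144, 16);
        gameboyCamera.targetTexture = rt;

        // Show the camera's output on the gameboy screen's material.
        screenRenderer.material.mainTexture = rt;
    }
}
```

The mini-game itself can then be a normal physics-based scene placed somewhere the player never walks, which sidesteps the UGUI limitation entirely.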

Duplicate UI for mobile VR Cam

I have a third-party package for mobile VR. It consists of two cameras that give the feel of a VR view. I have designed a user interface that I want to show from both VR cameras, but the problem is that both cameras are showing a single UI. My UI is not duplicated for each camera; instead, only a single UI is shown, as depicted in the image. How can I duplicate my UI so that it shows from both VR cameras?
I was able to find my answer here. It's slightly outdated, but the post is still valid for Unity 5.2 as well:
Create two different canvases in your scene.
Select the Render Mode for each canvas as Screen Space - Camera.
Then assign the camera for each of those canvases as required.
Then that particular canvas will be drawn using the camera assigned to it.
This renders the UI for both cameras, as needed in my mobile VR project.
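The quoted steps can also be done from a script, which is a sketch under the assumption that the two canvases and the two VR eye cameras already exist in the scene (all field names here are placeholders):

```csharp
using UnityEngine;

// Duplicates the UI across both VR eye cameras: one canvas per camera,
// each in Screen Space - Camera mode. Assign the four fields in the Inspector.
public class DualVRCanvasSetup : MonoBehaviour
{
    public Canvas leftCanvas;
    public Canvas rightCanvas;
    public Camera leftEyeCamera;
    public Camera rightEyeCamera;

    void Awake()
    {
        // Render Mode: Screen Space - Camera, with one camera per canvas.
        leftCanvas.renderMode = RenderMode.ScreenSpaceCamera;
        leftCanvas.worldCamera = leftEyeCamera;

        rightCanvas.renderMode = RenderMode.ScreenSpaceCamera;
        rightCanvas.worldCamera = rightEyeCamera;
    }
}
```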

Is there a way to make a 3D game in WinForms or WPF in C#?

I'm thinking about making a 3D point-and-click game; is it possible to make one in WinForms or WPF? I don't need any physics; all I need is for the application to render 3D objects. I know I can use XNA, but if I do, I will have to relearn almost everything again. My third approach would be to make the scenes in a 3D game engine, take a screenshot, and then load it as an image. Any suggestions would be appreciated.
There's a big difference between a 3D game, and just letting players interact with a rendered image.
Your approach of loading a pre-rendered image is possible in both WinForms and WPF. You would just need to capture click events on the image and check the click location against your list of active areas, then handle whatever needs to be done, e.g. move to the next area, activate an item, etc.
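A minimal WinForms sketch of that click-handling approach might look like the following. The image file name, the rectangles, and the action names are made-up placeholders for your own scene data.

```csharp
using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;

// A form showing one pre-rendered scene image, with hit-testing of
// clicks against a list of active rectangular areas.
public class SceneForm : Form
{
    readonly PictureBox view = new PictureBox { Dock = DockStyle.Fill };

    // Active areas of the current scene: rectangle -> action name.
    readonly Dictionary<Rectangle, string> activeAreas = new Dictionary<Rectangle, string>
    {
        { new Rectangle(100, 200, 80, 120), "door_to_hallway" }, // placeholder
        { new Rectangle(300, 150, 60, 60),  "pick_up_key" },     // placeholder
    };

    public SceneForm()
    {
        view.Image = Image.FromFile("scene1.png"); // pre-rendered scene image
        view.MouseClick += OnSceneClick;
        Controls.Add(view);
    }

    void OnSceneClick(object sender, MouseEventArgs e)
    {
        // Check the click location against each active area.
        foreach (var area in activeAreas)
        {
            if (area.Key.Contains(e.Location))
            {
                HandleAction(area.Value); // move to next area, activate item, etc.
                return;
            }
        }
    }

    void HandleAction(string action)
    {
        // Game logic goes here: swap view.Image to the next scene's
        // image and replace activeAreas with that scene's rectangles.
    }
}
```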
Edit from comment:
It's not so much about which is friendlier. You can host an XNA viewport in WinForms/WPF. It's more about how you want your game to work. If you never have moving 3D scenes, XNA is overkill, and images will work just fine.
If you want dynamic scenes, you'll need to render them on the fly, and then XNA makes more sense. It is a lot more work, though, compared to just displaying images.
If you just want to show pre-rendered 3D images in your game, why not create them using a real 3D graphics tool, such as 3D Studio Max or Maya (or a free one, such as Blender)? It sounds like there's no need to render the 3D scenes in a game engine at all.
