I have a third-party package for mobile VR. It consists of two cameras that give me the feel of a VR view. I have designed a user interface which I want to show from both of my VR cameras, but the problem is that both cameras are showing a single UI. My UI is not duplicated for each camera; instead only a single UI is shown, as the image depicts. How can I duplicate my UI so that it shows from both VR cams?
I was able to get my answer from here; it is a slightly outdated post but still valid for Unity 5.2:
Create two different canvases in your scene.
Select the Render Mode for each canvas as Screen Space - Camera.
Then assign the camera for each of those canvases as required.
Then that particular canvas will be drawn using the camera assigned to it.
This renders the UI for both cams in my mobile VR project.
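For anyone who prefers to wire this up from a script, a minimal sketch of the same idea follows; the canvas and camera fields are placeholders, not names from the original project:

```csharp
using UnityEngine;

// Minimal sketch: one canvas per VR eye camera so the UI is drawn by both.
// The four fields are placeholder references to assign in the Inspector.
public class PerEyeCanvasSetup : MonoBehaviour
{
    public Canvas leftEyeCanvas;
    public Canvas rightEyeCanvas;
    public Camera leftEyeCamera;
    public Camera rightEyeCamera;

    void Awake()
    {
        // Screen Space - Camera: each canvas is drawn only by its assigned camera.
        leftEyeCanvas.renderMode = RenderMode.ScreenSpaceCamera;
        leftEyeCanvas.worldCamera = leftEyeCamera;

        rightEyeCanvas.renderMode = RenderMode.ScreenSpaceCamera;
        rightEyeCanvas.worldCamera = rightEyeCamera;
    }
}
```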
Related
I am trying to build a VR application with portals into other worlds. These portals do not have to be flat, however; they can be bent. Here https://www.youtube.com/watch?v=EpoKwEMtEWc is a video from an early version.
Right now, the portals are 2D planes with camera textures which stretch and move according to the user's position. For VR use, and to ease performance issues, I would like to define which area of the screen gets rendered by which camera, sort of like a viewport in regular Unity, but not rectangular, and for VR with Windows Mixed Reality.
Is this somehow possible? Possibly even without Unity Pro?
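For context, a rough sketch of the camera-texture approach described above, i.e. a portal camera rendering into a RenderTexture that is shown on the portal plane; the field names and the head-following step are illustrative assumptions:

```csharp
using UnityEngine;

// Rough sketch of the "2D plane with a camera texture" portal described above.
// portalCamera and portalPlane are placeholder references for illustration.
public class PortalView : MonoBehaviour
{
    public Camera portalCamera;   // camera looking into the other world
    public Renderer portalPlane;  // flat (or bent) mesh that displays the feed

    RenderTexture portalTexture;

    void Start()
    {
        portalTexture = new RenderTexture(Screen.width, Screen.height, 24);
        portalCamera.targetTexture = portalTexture;
        portalPlane.material.mainTexture = portalTexture;
    }

    void LateUpdate()
    {
        // Move the portal camera with the user's head so the texture
        // stretches and shifts with the viewing position.
        portalCamera.transform.SetPositionAndRotation(
            Camera.main.transform.position,
            Camera.main.transform.rotation);
    }
}
```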
I am working on a project using ARCore.
I need the real-world image that is visible to the ARCore camera; previously I used the method of hiding the UI and capturing the screen.
But that was so slow that I looked for an alternative and found Frame.CameraImage.Texture in the ARCore API.
It works normally in the Unity Editor environment.
But when I build it on my phone and check it, the texture 'has no data'.
Texture2D snap = (Texture2D)Frame.CameraImage.Texture;
Perhaps the texture is null.
What is the reason?
If this doesn't work, is there another way I can get only the real-world image on a mobile phone?
The real-world image is required for image segmentation and should not include the augmented objects.
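For reference, the ARCore SDK for Unity (1.x) also exposes a CPU-side copy of the camera frame through Frame.CameraImage.AcquireCameraImageBytes(); whether that fits this project is an assumption, and the YUV-to-RGB conversion is left out, but a rough sketch looks like this:

```csharp
using GoogleARCore;
using UnityEngine;

// Rough sketch (ARCore SDK for Unity 1.x assumed): read the raw camera frame
// on the device instead of Frame.CameraImage.Texture, which the question above
// reports as having no data on the phone.
public class RealWorldImageGrabber : MonoBehaviour
{
    void Update()
    {
        using (CameraImageBytes image = Frame.CameraImage.AcquireCameraImageBytes())
        {
            if (!image.IsAvailable)
            {
                return; // no frame available this update
            }

            // image.Y, image.U and image.V point at the YUV planes of the raw
            // camera frame (no augmented objects); copy and convert them to RGB
            // before handing the result to the segmentation model.
            Debug.Log("Camera image: " + image.Width + "x" + image.Height +
                      ", Y row stride " + image.YRowStride);
        }
    }
}
```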
I've just started developing for the Windows Mixed Reality headset in Unity and it seems to be going well, until I build the program.
The point of my game is simple: one player navigates through a maze in VR and another watches the monitor and guides them through.
In the Unity editor under the "Game" tab, the cameras work as expected. I used RenderTextures to display two cameras (one of the VR view and one an overview of the entire maze) onto a canvas, which was the game view.
However, when I build my game, the only thing that appears on the monitor is the VR perspective.
I have set the target eye for the VR camera to "Both" and the main camera to "None (Main Display)" as others have suggested, but no luck.
Is this a small error I've overlooked, or is there a larger problem?
Alex.
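For comparison, the target-eye settings described above can also be applied from a script; a small sketch with placeholder camera references:

```csharp
using UnityEngine;

// Small sketch of the camera settings described above: the VR camera targets
// both eyes of the headset, the overview camera targets no eye and renders to
// the main display. Both fields are placeholder references.
public class MazeSpectatorSetup : MonoBehaviour
{
    public Camera vrCamera;        // player's view inside the maze
    public Camera overviewCamera;  // top-down view of the whole maze

    void Start()
    {
        vrCamera.stereoTargetEye = StereoTargetEyeMask.Both;

        overviewCamera.stereoTargetEye = StereoTargetEyeMask.None;
        overviewCamera.targetDisplay = 0; // Display 1, i.e. the monitor
    }
}
```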
**Update:** I've created a test scene where I've recreated the setup: a canvas with an image and text, primitive game objects, and two cameras in addition to the Camera Rig, with their target textures set to the same render texture. In this state it worked; however, when I installed the Lightweight Render Pipeline and upgraded all materials to it, the render texture turned pink and would not render anything from the cameras.
Considering this, my next step is to remove the Lightweight Render Pipeline by reverting to a previous commit which does not have it. *If you run into the same situation, remember that if you do not have a previous commit you can revert to, you will need to create new materials for all your game objects after removing the Lightweight Render Pipeline.*
Problem: In one scene within a VR project we are using a world-space canvas to display interactable UI. When running through the editor we have no issues; however, when we build the project, all UI canvases become invisible, though with the use of a laser pointer we can still interact with buttons on the canvas.
I've narrowed the cause down to the use of a specific render texture (only one), which is applied as the target texture of two cameras in the scene. The two cameras are used to provide a live feed of a device's view in the scene onto a mesh.
Setting the target textures of the two cameras (neither is the main camera in the scene) to null is the only way I can get the canvas to appear.
After running a build I always check the output_log.txt file, and I have not found any errors.
We are using:
Unity 2018.1.3f1,
VRTK 3.3.0a,
Steam VR w/HTC Vive,
Unity's Lightweight Render Pipeline,
Post Process layer
There is only one canvas in the scene, with all UI objects as children of that object. Our Canvas setup:
Note: I've set the VRTK_UI Canvas component to be inactive to check if that was the cause, and it was not.
Camera One:
Note: I've tried clicking the "Fix now" under the target texture, with no change or improvement
Camera 2:
Note: I've tried clicking the "Fix now" under the target texture, with no change or improvement
Mesh we are Rendering to:
Render Texture:
Main Camera:
The Lightweight Render Pipeline was the issue; removing it allowed everything to work as expected.
We faced the same problem with one camera. Disabling the camera and manually calling Render() from another script resolved the issue.
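A minimal sketch of that workaround (the camera field is a placeholder):

```csharp
using UnityEngine;

// Minimal sketch of the workaround above: keep the render-texture camera
// disabled and render it manually each frame with Camera.Render().
public class ManualFeedCameraRender : MonoBehaviour
{
    public Camera feedCamera; // camera whose target texture feeds the mesh

    void Start()
    {
        feedCamera.enabled = false; // stop Unity from rendering it automatically
    }

    void LateUpdate()
    {
        feedCamera.Render(); // render into its target texture once per frame
    }
}
```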
I'm looking to make a security-camera-type feature in a game I want to design. The idea is that there will be a designated rectangle, similar to a TV screen, in the game, and I want to display in that rectangle what a camera sees in a specific room.
So to set up a specific scenario, let's say we have Room A and Room B. I want Room B to have a TV screen that displays what is currently shown in Room A. I know this must be possible somehow using the XNA camera functions; I'm just really unsure how I would output what the camera sees in that area and then show it in the designated sprite rectangle in Room B.
Hopefully this makes sense or is possible :D
TKs,
Shane.
You will want to render your security camera scene to a custom RenderTarget2D, which you can then use as though it were a Texture2D.
The 5 basic steps to this are:
Create a custom RenderTarget2D
Tell your GraphicsDevice to render to this new target
Render your 'screen' scene
Reset the render target
Texture your screen polygon with the texture created by the render target
For more information, see Riemer's XNA tutorial.
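A condensed sketch of those five steps (XNA 4.0 assumed; DrawRoomA and the field names are placeholders for this example):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Condensed sketch of the five steps above (XNA 4.0 assumed).
// DrawRoomA and the field names are placeholders for this example.
public class SecurityCameraGame : Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    RenderTarget2D securityTarget;
    Rectangle tvScreenRect = new Rectangle(500, 100, 256, 192); // TV in Room B

    public SecurityCameraGame()
    {
        graphics = new GraphicsDeviceManager(this);
    }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        // 1. Create a custom RenderTarget2D.
        securityTarget = new RenderTarget2D(GraphicsDevice, 256, 192);
    }

    protected override void Draw(GameTime gameTime)
    {
        // 2. Tell the GraphicsDevice to render to the new target.
        GraphicsDevice.SetRenderTarget(securityTarget);
        GraphicsDevice.Clear(Color.Black);

        // 3. Render the 'screen' scene: Room A from the security camera's view.
        DrawRoomA();

        // 4. Reset the render target so drawing goes back to the back buffer.
        GraphicsDevice.SetRenderTarget(null);
        GraphicsDevice.Clear(Color.CornflowerBlue);

        // 5. Use the render target as a texture for the TV screen in Room B.
        spriteBatch.Begin();
        spriteBatch.Draw(securityTarget, tvScreenRect, Color.White);
        spriteBatch.End();

        base.Draw(gameTime);
    }

    void DrawRoomA()
    {
        // Draw Room A geometry here, using the security camera's view matrix.
    }
}
```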