Capturing a snapshot of VideoBackground within the Unity Vuforia plugin - C#

I would like to capture the real-world view at the moment content is placed using a Ground Plane or Mid Air stage.
This seems to be readily available within the AR camera's BackgroundPlane Mesh Renderer (Custom/VideoBackground; see the screenshot below). However, when I try to access this texture and encode it to JPG, the output image is black.
Here is the code I am testing with:
MeshRenderer backgroundMesh = GameObject.Find("BackgroundPlane").GetComponent<MeshRenderer>();
Texture2D texture = (Texture2D)backgroundMesh.material.mainTexture;
byte[] bytes = texture.EncodeToJPG();
var dirPath = Application.dataPath + "/../SavedImages/";
if (!Directory.Exists(dirPath))
{
    Directory.CreateDirectory(dirPath);
}
File.WriteAllBytes(dirPath + "Image" + ".jpg", bytes);
Here is a screenshot of the Vuforia settings for Video Background:

You can use the Vuforia Image class to capture the real-world camera feed only.
The script below is tested on mobile and is used in FMETP STREAM.
For your case, you can then encode the resulting Texture2D to JPG.
using UnityEngine;
using System.Collections;
using Vuforia;
using UnityEngine.UI;

public class VuforiaCamAccess : MonoBehaviour
{
    private bool mAccessCameraImage = true;
    public RawImage rawImage;
    public GameObject Mesh;
    private Texture2D texture;

#if UNITY_EDITOR
    private Vuforia.PIXEL_FORMAT mPixelFormat = Vuforia.PIXEL_FORMAT.GRAYSCALE;
#else
    private Vuforia.PIXEL_FORMAT mPixelFormat = Vuforia.PIXEL_FORMAT.RGB888;
#endif

    private bool mFormatRegistered = false;

    void Start()
    {
#if UNITY_EDITOR
        texture = new Texture2D(Screen.width, Screen.height, TextureFormat.R8, false);
#else
        texture = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
#endif
        // Register Vuforia life-cycle callbacks:
        Vuforia.VuforiaARController.Instance.RegisterVuforiaStartedCallback(OnVuforiaStarted);
        Vuforia.VuforiaARController.Instance.RegisterOnPauseCallback(OnPause);
        Vuforia.VuforiaARController.Instance.RegisterTrackablesUpdatedCallback(OnTrackablesUpdated);
    }

    private void OnVuforiaStarted()
    {
        // Try to register the camera image format
        if (CameraDevice.Instance.SetFrameFormat(mPixelFormat, true))
        {
            Debug.Log("Successfully registered pixel format " + mPixelFormat.ToString());
            mFormatRegistered = true;
        }
        else
        {
            Debug.LogError("Failed to register pixel format " + mPixelFormat.ToString() +
                "\n the format may be unsupported by your device;" +
                "\n consider using a different pixel format.");
            mFormatRegistered = false;
        }
    }

    private void OnPause(bool paused)
    {
        if (paused)
        {
            Debug.Log("App was paused");
            UnregisterFormat();
        }
        else
        {
            Debug.Log("App was resumed");
            RegisterFormat();
        }
    }

    private void OnTrackablesUpdated()
    {
        // Skip if the previous frame is still being loaded into the Texture2D
        if (LoadingTexture) return;

        if (mFormatRegistered)
        {
            if (mAccessCameraImage)
            {
                Vuforia.Image image = CameraDevice.Instance.GetCameraImage(mPixelFormat);
                //if (image != null && image.IsValid())
                if (image != null)
                {
                    byte[] pixels = image.Pixels;
                    int width = image.Width;
                    int height = image.Height;
                    StartCoroutine(SetTexture(pixels, width, height));
                }
            }
        }
    }

    bool LoadingTexture = false;

    IEnumerator SetTexture(byte[] pixels, int width, int height)
    {
        if (!LoadingTexture)
        {
            LoadingTexture = true;
            if (pixels != null && pixels.Length > 0)
            {
                if (texture.width != width || texture.height != height)
                {
#if UNITY_EDITOR
                    texture = new Texture2D(width, height, TextureFormat.R8, false);
#else
                    texture = new Texture2D(width, height, TextureFormat.RGB24, false);
#endif
                }
                texture.LoadRawTextureData(pixels);
                texture.Apply();

                if (rawImage != null)
                {
                    rawImage.texture = texture;
                    rawImage.material.mainTexture = texture;
                }
                if (Mesh != null) Mesh.GetComponent<Renderer>().material.mainTexture = texture;
            }
            yield return null;
            LoadingTexture = false;
        }
    }

    private void UnregisterFormat()
    {
        Debug.Log("Unregistering camera pixel format " + mPixelFormat.ToString());
        CameraDevice.Instance.SetFrameFormat(mPixelFormat, false);
        mFormatRegistered = false;
    }

    private void RegisterFormat()
    {
        if (CameraDevice.Instance.SetFrameFormat(mPixelFormat, true))
        {
            Debug.Log("Successfully registered camera pixel format " + mPixelFormat.ToString());
            mFormatRegistered = true;
        }
        else
        {
            Debug.LogError("Failed to register camera pixel format " + mPixelFormat.ToString());
            mFormatRegistered = false;
        }
    }
}
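To then save a captured frame to disk, the Texture2D can be encoded the same way the question attempts. A minimal sketch, reusing the `texture` field from the script above (on device it holds RGB24 data; only call this after at least one camera frame has been loaded):
void SaveTextureAsJpg()
{
    // EncodeToJPG works here because `texture` was filled via LoadRawTextureData and is readable
    byte[] bytes = texture.EncodeToJPG();
    var dirPath = Application.dataPath + "/../SavedImages/";
    if (!System.IO.Directory.Exists(dirPath))
    {
        System.IO.Directory.CreateDirectory(dirPath);
    }
    System.IO.File.WriteAllBytes(dirPath + "Image.jpg", bytes);
}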

I was able to resolve this issue by working with the Vuforia ARCamera game object directly, rather than the BackgroundPlane Mesh Renderer. The ARCamera does not have a targetTexture set, as it outputs directly to the screen. However, I can assign a temporary targetTexture, render one frame into it, and remove the targetTexture immediately after processing, so that AR mode can continue. A sketch of this approach follows below.
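A minimal sketch of the temporary-targetTexture approach, assuming `arCamera` references the ARCamera's Camera component (the helper name `CaptureFrame` is illustrative):
Texture2D CaptureFrame(Camera arCamera)
{
    // Render one frame into a temporary RenderTexture
    RenderTexture rt = new RenderTexture(Screen.width, Screen.height, 24);
    arCamera.targetTexture = rt;
    arCamera.Render();

    // Read the rendered pixels back into a Texture2D
    RenderTexture previous = RenderTexture.active;
    RenderTexture.active = rt;
    Texture2D snapshot = new Texture2D(rt.width, rt.height, TextureFormat.RGB24, false);
    snapshot.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
    snapshot.Apply();
    RenderTexture.active = previous;

    // Remove the temporary target so the ARCamera renders to the screen again
    arCamera.targetTexture = null;
    rt.Release();
    Destroy(rt);
    return snapshot;
}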
There is also a further solution: use the TextureBufferCamera that Vuforia creates at runtime, which already outputs to a targetTexture. However, it renders at a fixed resolution, so the ARCamera is better for my specific requirement.

Related

Rotate Texture2D from XRCpuImage and transfer it to RawImage - Crashes after approx. 30 seconds

I get the latest image via XRCpuImage, convert it to a Texture2D, and transfer it to a RawImage.
The result looks like this: XRCpuImage on RawImage - no rotation.
As you can see, the transferred texture is rotated, so I use a function called "rotateTexture" to rotate it into the correct position: Rotated Texture.
The rotation looks good so far, but when I run the application it crashes after about 30 seconds.
Could someone tell me what could be wrong with the code below?
using System;
using Unity.Collections.LowLevel.Unsafe;
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;
using Unity.Collections;

public class CPU_RawImage : MonoBehaviour
{
    Texture2D m_CameraTexture;
    GameObject DR_layer;

    [SerializeField]
    [Tooltip("The ARCameraManager which will produce frame events.")]
    ARCameraManager m_CameraManager; // create m_CameraManager

    /// <summary>
    /// Get or set the <c>ARCameraManager</c>.
    /// </summary>
    public ARCameraManager cameraManager
    {
        get => m_CameraManager;
        set => m_CameraManager = value; // assign the ARCamera
    }

    [SerializeField]
    RawImage m_RawCameraImage;

    /// <summary>
    /// The UI RawImage used to display the image on screen.
    /// </summary>
    public RawImage rawCameraImage
    {
        get => m_RawCameraImage;
        set => m_RawCameraImage = value;
    }

    XRCpuImage.Transformation m_Transformation = XRCpuImage.Transformation.MirrorX;

    void OnEnable()
    {
        if (m_CameraManager != null)
        {
            m_CameraManager.frameReceived += OnCameraFrameReceived;
        }
    }

    void OnDisable()
    {
        if (m_CameraManager != null)
        {
            m_CameraManager.frameReceived -= OnCameraFrameReceived;
        }
    }

    // Rotate function
    private Texture2D rotateTexture(Texture2D originalTexture, bool clockwise)
    {
        Color32[] original = originalTexture.GetPixels32();
        Color32[] rotated = new Color32[original.Length];
        int w = originalTexture.width;
        int h = originalTexture.height;
        int iRotated, iOriginal;

        for (int j = 0; j < h; ++j)
        {
            for (int i = 0; i < w; ++i)
            {
                iRotated = (i + 1) * h - j - 1;
                iOriginal = clockwise ? original.Length - 1 - (j * w + i) : j * w + i;
                rotated[iRotated] = original[iOriginal];
            }
        }

        Texture2D rotatedTexture = new Texture2D(h, w);
        rotatedTexture.SetPixels32(rotated);
        rotatedTexture.Apply();
        return rotatedTexture;
    }

    unsafe void UpdateCameraImage(int[] pixel)
    {
        // Attempt to get the latest camera image. If this method succeeds,
        // it acquires a native resource that must be disposed (see below).
        if (!cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
        {
            return;
        }

        // Once we have a valid XRCpuImage, we can access the individual image "planes"
        // (the separate channels in the image). XRCpuImage.GetPlane provides
        // low-overhead access to this data. This could then be passed to a
        // computer vision algorithm. Here, we will convert the camera image
        // to an RGBA texture and draw it on the screen.

        // Choose an RGBA format.
        // See XRCpuImage.FormatSupported for a complete list of supported formats.
        var format = TextureFormat.RGBA32;

        if (m_CameraTexture == null || m_CameraTexture.width != image.width || m_CameraTexture.height != image.height)
        {
            m_CameraTexture = new Texture2D(image.width, image.height, format, false);
        }

        // Convert the image to format, flipping the image across the Y axis.
        // We can also get a sub rectangle, but we'll get the full image here.
        var conversionParams = new XRCpuImage.ConversionParams(image, format, m_Transformation);

        // Texture2D allows us to write directly to the raw texture data.
        // This allows us to do the conversion in-place without making any copies.
        var rawTextureData = m_CameraTexture.GetRawTextureData<byte>();
        try
        {
            image.Convert(conversionParams, new IntPtr(rawTextureData.GetUnsafePtr()), rawTextureData.Length);
        }
        finally
        {
            // We must dispose of the XRCpuImage after we're finished
            // with it to avoid leaking native resources.
            image.Dispose();
        }

        // Apply the updated texture data to our texture
        m_CameraTexture.Apply();

        // Rotate texture clockwise
        //m_CameraTexture = rotateTexture(m_CameraTexture, true);

        // Set the RawImage's texture so we can visualize it.
        m_RawCameraImage.texture = m_CameraTexture;
    }

    void Start()
    {
        DR_layer = GameObject.Find("RawImage");
    }

    void OnCameraFrameReceived(ARCameraFrameEventArgs eventArgs)
    {
        int[] pixel = { 200, 1000 };
        UpdateCameraImage(pixel);
    }
}

Unity Editor Script: How to render a scene in the inspector GUI?

I'm writing a Unity editor script which draws a preview scene in the inspector GUI. Basically, I instantiate a prefab with a camera component and move it into a temporary scene, then try to draw the scene onto a texture using that camera. My current approach doesn't seem to be working, or maybe there's something wrong in my code; I'd appreciate any help.
Below is some of my code that does the drawing:
using System;
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEditor;
using UnityEditor.SceneManagement;

[CustomEditor(typeof(NPCSpawnConfig))]
public class NPCSpawnEditor : Editor
{
    enum SupportedAspects
    {
        Aspect4by3 = 1,
        Aspect5by4 = 2,
        Aspect16by10 = 3,
        Aspect16by9 = 4
    };

    Camera _cam = null;
    RenderTexture _rt;
    Texture2D _tex2d;
    Scene _scene;

    // preview variables
    SupportedAspects _aspectChoiceIdx = SupportedAspects.Aspect16by10;
    float _curAspect;

    // world space (orthographicSize)
    float _worldScreenHeight = 5;
    int _renderTextureHeight = 1080;

    float ToFloat(SupportedAspects aspects)
    {
        switch (aspects)
        {
            case SupportedAspects.Aspect16by10:
                return 16 / 10f;
            case SupportedAspects.Aspect16by9:
                return 16 / 9f;
            case SupportedAspects.Aspect4by3:
                return 4 / 3f;
            case SupportedAspects.Aspect5by4:
                return 5 / 4f;
            default:
                throw new ArgumentException();
        }
    }

    void DrawRefScene()
    {
        _rt = new RenderTexture(Mathf.RoundToInt(_curAspect * _renderTextureHeight), _renderTextureHeight, 16);
        _cam.targetTexture = _rt;
        _cam.Render();
        _tex2d = new Texture2D(_rt.width, _rt.height, TextureFormat.RGBA32, false);
        _tex2d.Apply(false);
        Graphics.CopyTexture(_rt, _tex2d);
    }

    Vector2 GetGUIPreviewSize()
    {
        Vector2 camSizeWorld = new Vector2(_worldScreenHeight * _curAspect, _worldScreenHeight);
        float scaleFactor = EditorGUIUtility.currentViewWidth / camSizeWorld.x;
        return new Vector2(EditorGUIUtility.currentViewWidth, scaleFactor * camSizeWorld.y);
    }

    #region Init
    void OnEnable()
    {
        void OpenSceneDelay()
        {
            EditorApplication.delayCall -= OpenSceneDelay;
            DrawRefScene();
        }

        _aspectChoiceIdx = SupportedAspects.Aspect16by10;
        _scene = EditorSceneManager.NewPreviewScene();
        PrefabUtility.LoadPrefabContentsIntoPreviewScene("Assets/Prefabs/Demo/DemoBkg.prefab", _scene);
        _cam = _scene.GetRootGameObjects()[0].GetComponentInChildren<Camera>();
        _curAspect = ToFloat(_aspectChoiceIdx);
        _cam.aspect = _curAspect;
        _cam.orthographicSize = _worldScreenHeight;
        EditorApplication.delayCall += OpenSceneDelay;
    }

    void OnDisable()
    {
        EditorSceneManager.ClosePreviewScene(_scene);
    }
    #endregion

    void OnCamSettingChange()
    {
        _curAspect = ToFloat(_aspectChoiceIdx);
        _cam.aspect = _curAspect;
        _cam.orthographicSize = _worldScreenHeight;
        DrawRefScene();
    }

    // GUI states
    class GUIControlStates
    {
        public bool foldout = false;
    };

    GUIControlStates _guiStates = new GUIControlStates();

    public override void OnInspectorGUI()
    {
        // draw serializedObject fields
        // ....

        // display options
        using (var scope = new EditorGUI.ChangeCheckScope())
        {
            _aspectChoiceIdx = (SupportedAspects)EditorGUILayout.EnumPopup("label", (Enum)_aspectChoiceIdx);
            if (scope.changed)
            {
                OnCamSettingChange();
            }
        }

        _guiStates.foldout = EditorGUILayout.Foldout(_guiStates.foldout, "label", true);
        if (_guiStates.foldout)
        {
            using (var scope = new EditorGUI.ChangeCheckScope())
            {
                _worldScreenHeight = EditorGUILayout.FloatField("label", _worldScreenHeight);
                _renderTextureHeight = EditorGUILayout.IntField("label", _renderTextureHeight);
                if (scope.changed)
                {
                    OnCamSettingChange();
                }
            }
        }

        if (_tex2d != null)
        {
            Vector2 sz = GetGUIPreviewSize();
            Rect r = EditorGUILayout.GetControlRect(false,
                GUILayout.Height(sz.y),
                GUILayout.ExpandHeight(false));
            EditorGUI.DrawPreviewTexture(r, _tex2d);
        }
    }
}
Here is the result: only the clear color is displayed, even though the prefab contains a lot of sprites that should be drawn, and the camera is correctly positioned relative to the sprites.
Solved this by adding the following two lines after getting the camera component. CameraType.Preview marks the camera as a preview camera, and setting Camera.scene makes it render only the contents of that preview scene:
_cam.cameraType = CameraType.Preview;
_cam.scene = _scene;
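For context, a sketch of where these two lines land in the OnEnable above:
_cam = _scene.GetRootGameObjects()[0].GetComponentInChildren<Camera>();
_cam.cameraType = CameraType.Preview; // treat this camera as an editor preview camera
_cam.scene = _scene;                  // render only the contents of the preview scene
_curAspect = ToFloat(_aspectChoiceIdx);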

Unity C# - Is there a faster way to load a number of image files from disk than UnityWebRequest, WWW or File.ReadAllBytes?

It's a VN-style game with user-generated content, and I need to load images without delay.
Because the content is user-generated, the images sit in a folder alongside the game.
For the same reason I can't preload the images, since I can't know which image will be needed next.
I have tried UnityWebRequest, WWW and File.ReadAllBytes, and all of them have more delay than I expected, even though I'm running on an SSD.
Is there a faster way?
Here is the code I'm using for testing the loading times of the images:
using UnityEngine;
using System.IO;
using UnityEngine.Networking;
using System.Collections;
using UnityEngine.UI;
using System.Threading.Tasks;

/// <summary>
/// 2020/19/05 -- Unity 2019.3.3f1 -- C#
/// </summary>
public class itemCreatorImageLoad : MonoBehaviour
{
    public Image image; // this references a UI Panel
    private Texture2D texture2D;
    private UnityWebRequest uwr;
    public RawImage rawImage; // this references a UI RawImage

    // path = @"C:\UnityTests\Referencing\Referencing\Assets\StreamingAssets\Items\image.png"
    // the @ handles the / and \ conventions that seem to come from the program using paths for the web
    // C:/.../.../... web
    // C:\...\...\... pc
    public void LoadImageWWW(string path)
    {
        if (texture2D)
        {
            // This follows the reference and destroys the texture. Otherwise it would just get a new one
            // and the old textures would pile up in memory without you being able to remove them.
            Destroy(texture2D);
        }
        texture2D = new Texture2D(1, 1);
        texture2D = new WWW(path).textureNonReadable as Texture2D;
        image.sprite = Sprite.Create(texture2D, new Rect(0.0f, 0.0f, texture2D.width, texture2D.height), new Vector2(0.5f, 0.5f), 100.0f);
        image.preserveAspect = true;
    }

    public void LoadImageWWWv2(string path)
    {
        // "http://url/image.jpg"
        StartCoroutine(setImage(path));
    }

    // this comes from https://stackoverflow.com/questions/31765518/how-to-load-an-image-from-url-with-unity
    IEnumerator setImage(string url)
    {
        Texture2D texture = image.canvasRenderer.GetMaterial().mainTexture as Texture2D;
        WWW www = new WWW(url);
        yield return www;
        // calling this function with StartCoroutine solves the problem
        Debug.Log("Why on earth is this never called?");
        www.LoadImageIntoTexture(texture);
        www.Dispose();
        www = null;
    }

    public void LoadImageReadAllBytes(string path)
    {
        byte[] pngBytes = File.ReadAllBytes(path);
        if (texture2D)
        {
            Destroy(texture2D); // destroy the old texture so old instances don't pile up in memory
        }
        texture2D = new Texture2D(1, 1);
        texture2D.LoadImage(pngBytes);
        image.sprite = Sprite.Create(texture2D, new Rect(0.0f, 0.0f, texture2D.width, texture2D.height), new Vector2(0.5f, 0.5f), 100.0f);
        image.preserveAspect = true;
    }

    public void LoadImageUnityWebRequest(string path)
    {
        StartCoroutine(LoadImageCorroutine());

        IEnumerator LoadImageCorroutine()
        {
            using (uwr = UnityWebRequestTexture.GetTexture(path))
            {
                yield return uwr.SendWebRequest();

                // I would always check for errors first
                if (uwr.isHttpError || uwr.isNetworkError)
                {
                    Debug.LogError($"Could not load texture due to {uwr.responseCode} - \"{uwr.error}\"", this);
                    yield break;
                }

                // Destroy the current texture instance
                if (rawImage.texture)
                {
                    Destroy(texture2D); // destroy the old texture so old instances don't pile up in memory
                }
                rawImage.texture = DownloadHandlerTexture.GetContent(uwr);
                image.sprite = Sprite.Create(rawImage.texture as Texture2D, new Rect(0.0f, 0.0f, rawImage.texture.width, rawImage.texture.height), new Vector2(0.5f, 0.5f), 100.0f);
                image.preserveAspect = true;
            }
            StopCoroutine(LoadImageCorroutine());
        }
    }

    public void LoadImageUnityWebRequestv2(string path)
    {
        StartCoroutine(LoadImageUnityWebRequestv2Coroutine(path));
    }

    // this comes from https://stackoverflow.com/questions/31765518/how-to-load-an-image-from-url-with-unity
    IEnumerator LoadImageUnityWebRequestv2Coroutine(string MediaUrl)
    {
        UnityWebRequest request = UnityWebRequestTexture.GetTexture(MediaUrl);
        yield return request.SendWebRequest();
        if (request.isNetworkError || request.isHttpError)
        {
            Debug.Log(request.error);
        }
        else
        {
            rawImage.texture = ((DownloadHandlerTexture)request.downloadHandler).texture;
        }
    }

    // an async version that I grabbed from somewhere, but I don't remember where anymore
    [SerializeField] string _imageUrl;
    [SerializeField] Material _material;

    public async void MyFunction()
    {
        Texture2D texture = await GetRemoteTexture(_imageUrl);
        _material.mainTexture = texture;
    }

    public static async Task<Texture2D> GetRemoteTexture(string url)
    {
        using (UnityWebRequest www = UnityWebRequestTexture.GetTexture(url))
        {
            // begin request:
            var asyncOp = www.SendWebRequest();

            // await until it's done:
            while (asyncOp.isDone == false)
            {
                await Task.Delay(1000 / 30); // 30 hertz
            }

            // read results:
            if (www.isNetworkError || www.isHttpError)
            {
                // log error:
#if DEBUG
                Debug.Log($"{www.error}, URL:{www.url}");
#endif
                // nothing to return on error:
                return null;
            }
            else
            {
                // return valid results:
                return DownloadHandlerTexture.GetContent(www);
            }
        }
    }
}

Using the Unity Editor, how do I upload a file from my computer and have it appear on a 3D object or plane?

I found a tutorial on YouTube that correctly added a File Explorer dialog and image upload to a RawImage on a canvas using Unity 2017.3.1f1.
What I'm trying to do is apply the same image, after a button press, to a 3D object like a cube or plane, as shown by the colored cube. When I run the code below, the texture registers as being present on the cube but doesn't render. Any help is appreciated.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
using UnityEditor;

public class Explorer : MonoBehaviour
{
    string path;
    public RawImage image;

    public void OpenExplorer()
    {
        path = EditorUtility.OpenFilePanel("Overwrite with png", "", "png");
        GetImage();
    }

    void GetImage()
    {
        if (path != null)
        {
            UpdateImage();
        }
    }

    void UpdateImage()
    {
        WWW www = new WWW("file:///" + path);
        image.texture = www.texture;
    }
}
There is a tiny bug in your code: it will work sometimes and fail other times, with the odds depending on the size of the image. It will work if the image is really small but fail when it is a large image.
The reason is the code in your UpdateImage function. WWW is supposed to be used in a coroutine, because you need to yield until it has finished loading or downloading the file before accessing the texture with www.texture. You are not doing that now. Change UpdateImage to a coroutine, yield the WWW, and it should work fine:
void GetImage()
{
    if (path != null)
    {
        StartCoroutine(UpdateImage());
    }
}

IEnumerator UpdateImage()
{
    WWW www = new WWW("file:///" + path);
    yield return www;
    image.texture = www.texture;
}
If for some reason you can't use a coroutine, for example because it's an Editor plugin, then forget about the WWW API and use File.ReadAllBytes to read the image:
void GetImage()
{
    if (path != null)
    {
        UpdateImage();
    }
}

void UpdateImage()
{
    byte[] imgByte = File.ReadAllBytes(path);
    Texture2D texture = new Texture2D(2, 2);
    texture.LoadImage(imgByte);
    image.texture = texture;
}
To assign the image to a 3D object, get the MeshRenderer, then set the texture as the mainTexture of the material the renderer is using:
//Drag the 3D Object here
public MeshRenderer mRenderer;

void UpdateImage()
{
    byte[] imgByte = File.ReadAllBytes(path);
    Texture2D texture = new Texture2D(2, 2);
    texture.LoadImage(imgByte);
    mRenderer.material.mainTexture = texture;
}
Here is the complete script:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
using UnityEditor;
using System.IO;

public class Explorer : MonoBehaviour
{
    string path;
    public MeshRenderer mRenderer;

    public void OpenExplorer()
    {
        path = EditorUtility.OpenFilePanel("Overwrite with png", "", "png");
        GetImage();
    }

    void GetImage()
    {
        if (path != null)
        {
            UpdateImage();
        }
    }

    void UpdateImage()
    {
        byte[] imgByte = File.ReadAllBytes(path);
        Texture2D texture = new Texture2D(2, 2);
        texture.LoadImage(imgByte);
        mRenderer.material.mainTexture = texture;

        //WWW www = new WWW("file:///" + path);
        //yield return www;
        //image.texture = texture;
    }
}

Cannot make a rectangle display with Unity + Oculus Rift

I'm trying to display a simple rectangle right in front of my OVRPlayerController's camera, but it seems to be impossible.
I think it might have something to do with the fact that Rect is 2D and my environment is 3D. Does that make sense?
The code is the following (I have omitted the unnecessary parts):
static int MAX_MENU_OPTIONS = 3;
public GameObject Menu;
private bool showMenu = false;
private float menuIndex = 0;
private bool hasPressedDirectionalPad = false;
public Transform[] buttons = new Transform[MAX_MENU_OPTIONS];
private static Texture2D staticRectTexture;
private static GUIStyle staticRectStyle;

bool DpadIsPressed()
{
    if (!hasPressedDirectionalPad && Input.GetAxis("DpadY") != 0 && hasPressedDirectionalPad == false)
    {
        menuIndex += Mathf.Sign(Input.GetAxis("DpadY")) * (-1);
        if (menuIndex < 0) menuIndex = 0;
        else if (menuIndex > MAX_MENU_OPTIONS - 1) menuIndex = MAX_MENU_OPTIONS - 1;
        hasPressedDirectionalPad = true;
    }
    if (Input.GetAxis("DpadY") == 0)
    {
        hasPressedDirectionalPad = false;
    }
    return hasPressedDirectionalPad;
}

void Start()
{
    Menu.SetActive(false);
    staticRectTexture = new Texture2D(1, 1, TextureFormat.RGB24, true);
    staticRectStyle = new GUIStyle();
}

void Update()
{
    if (Input.GetButtonDown("A"))
    {
        DoAction();
        print("A key was pressed");
    }
    if (Input.GetButtonDown("Options"))
    {
        showMenu = !showMenu;
        if (showMenu)
        {
            Time.timeScale = 0;
            menuIndex = 0;
            Menu.transform.rotation = this.transform.rotation;
            Menu.transform.position = this.transform.position;
        }
        else
            Time.timeScale = 1;
    }
    if (DpadIsPressed())
    {
        print("Dpad key was pressed and menuIndex = " + menuIndex);
    }
    if (showMenu)
    {
        Menu.SetActive(true);
    }
    if (!showMenu)
    {
        Menu.SetActive(false);
    }
}

void OnGUI()
{
    if (showMenu)
    {
        Vector3 offset = new Vector3(0, 0, 0.2f);
        Vector3 posSelectRectangle = buttons[(int)menuIndex].transform.position + offset;
        Rect selectionRectangle = new Rect(posSelectRectangle.x - (float)177 / 2,
            posSelectRectangle.y - (float)43 / 2,
            177.0f, 43.0f);
        GUIDrawRect(selectionRectangle, new Color(255.0f, 0, 0));
    }
}

void DoAction()
{
    if (menuIndex == 0)
        Salir();
    /*else if (menuIndex == 1)
        Guardar();*/
    else if (menuIndex == 2)
        Salir();
}

public static void GUIDrawRect(Rect position, Color color)
{
    staticRectTexture.SetPixel(0, 0, color);
    staticRectTexture.Apply();
    staticRectStyle.normal.background = staticRectTexture;
    GUI.Box(position, GUIContent.none, staticRectStyle);
}
The functions are reached, but the rectangle doesn't show up. Do you see the mistake? Maybe it has something to do with the Oculus Rift?
OnGUI and Screen Space canvases are not supported in VR mode, because there is no way to handle stereoscopic rendering for them. (Note: they will still render to the duplicate display on the user's PC.)
If you want to render in front of the user's camera (like a HUD), you can:
Use a Canvas:
Create a canvas, add your UI, and set the canvas to World Space. Parent the canvas to the VR camera game object, scale it down (it defaults to very, very big), and rotate it so it faces the camera. A code sketch of this setup follows below.
Or, use 3D:
Create a 3D object (Plane, Cube, Quad, whatever!) and parent it to your VR camera. You can use standard 3D techniques to update its texture or render texture.
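A minimal sketch of the world-space canvas setup done in code, assuming `vrCamera` is the transform of the OVRPlayerController's camera (the method name, position, scale, and size values are illustrative):
void CreateHudCanvas(Transform vrCamera)
{
    // Create a canvas and put it in world space
    var canvasGO = new GameObject("HudCanvas");
    var canvas = canvasGO.AddComponent<Canvas>();
    canvas.renderMode = RenderMode.WorldSpace;

    // Parent it to the camera so it follows the user's head
    canvasGO.transform.SetParent(vrCamera, false);
    canvasGO.transform.localPosition = new Vector3(0f, 0f, 2f); // 2 units in front of the camera
    canvasGO.transform.localRotation = Quaternion.identity;     // face the same direction as the camera
    canvasGO.transform.localScale = Vector3.one * 0.001f;       // world-space canvases default to very large

    // Give the canvas a pixel size to lay out UI against
    canvasGO.GetComponent<RectTransform>().sizeDelta = new Vector2(800f, 600f);
}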
