Oculus Go's controller not detected by OVRInput.IsControllerConnected() - c#

I have a Unity game that runs on both Android Cardboard and Oculus Go. I'm trying to determine whether the Go's controller is connected.
I imported the Oculus Integration package from the Unity Asset Store (though I'm not sure it's actually required... I've gotten the impression that Oculus support has been built into Unity since at least 2018.3, if not 2018.2 or earlier). I also removed Cardboard and added Oculus as a Virtual Reality SDK in Player settings.
The following code executes in the Start() method that initializes most of my game:
void Start() {
    // ...
    if (OVRInput.IsControllerConnected(OVRInput.Controller.RTrackedRemote)) {
        // do something visible
    }
    // ...
}
The problem is, OVRInput.IsControllerConnected(...) always returns false, and the code inside the block never executes.
Other things I've tried:
Moved the call to OVRInput.IsControllerConnected() from Start() to Update(), in case it was an initialization-time issue. Same result.
Instead of using OVRInput.Controller.RTrackedRemote as the argument, I tried the other enum values... LTrackedRemote, Active, All, Gamepad, LTouch, RTouch, Remote, Touch, Touchpad, and None. All of them except .None returned false; .None returned true.
I set a breakpoint on the line calling OVRInput.IsControllerConnected() (after moving it to Update()), then called OVRInput.GetConnectedControllers() in VS2017's Immediate window... it returned "None". Ditto for OVRInput.GetActiveController().
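For what it's worth, logging the same calls every frame shows the same thing (sketch):
void Update()
{
    // both of these print "None" on the Go
    Debug.Log(OVRInput.GetConnectedControllers());
    Debug.Log(OVRInput.GetActiveController());
}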
The game itself began as Android Cardboard. So far, the only major changes I've made to it are:
Importing the Oculus Integration package from Unity's Asset Store.
In Player -> XR Settings, deleting "Cardboard" and adding "Oculus" as a VR SDK.
In Build Settings, changing the build system from 'Gradle' to 'Internal' (Gradle builds failed... I've seen posts from summer 2018 saying it's a Unity bug, but I'm not sure whether that's still current info; regardless, switching from Gradle to Internal made THAT error go away).
Most notably, I have NOT added any Oculus-specific prefabs, or changed/removed any of the GoogleVR-specific prefabs.

I know you tried moving IsControllerConnected to Update, but did you try GetConnectedControllers in Update after a second? That's what did the trick for me. So, in Update():
// initialise the hand once, one second after start
if (!handInitialised) {
    initialWait += Time.deltaTime;
    if (initialWait > 1f) {
        OVRInput.Controller c = OVRInput.GetConnectedControllers();
        if (c == OVRInput.Controller.LTrackedRemote || c == OVRInput.Controller.LTouch) {
            // left-hand controller is connected: set up the hand here
        }
        // ...
        handInitialised = true;
    }
}
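Note that OVRInput.GetConnectedControllers() returns a bitmask of every connected controller, so the == comparison above only matches while exactly one device is connected; a mask test is safer. A self-contained version of the idea (my sketch; HandInitialiser is just a placeholder name):
using UnityEngine;

public class HandInitialiser : MonoBehaviour
{
    private bool handInitialised;
    private float initialWait;

    void Update()
    {
        if (handInitialised)
            return;

        initialWait += Time.deltaTime;
        if (initialWait > 1f)
        {
            // bitmask of everything currently connected
            OVRInput.Controller connected = OVRInput.GetConnectedControllers();
            if ((connected & OVRInput.Controller.LTrackedRemote) != OVRInput.Controller.None ||
                (connected & OVRInput.Controller.LTouch) != OVRInput.Controller.None)
            {
                // left-hand controller is present: initialise the hand here
            }
            handInitialised = true;
        }
    }
}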

Related

SpatialCoordinateSystem.TryGetTransformTo() from Webcam to Unity space fails in a non-initial, separated spatial environment on HoloLens 2

I have a quite specific problem regarding the transformation matrix that maps from the HoloLens 2 webcam space into the current Unity scene space in a Unity + MRTK + OpenXR app. The goal is to obtain, in Unity space, the exact camera pose associated with a camera frame acquired through Windows.Media.Capture.
My environment:
Unity 2021.3.8.
MRTK v2.8.2
Mixed Reality OpenXR Plug-In v1.6.0
To obtain the matrix, I first receive a Windows.Perception.Spatial.SpatialCoordinateSystem instance (unityReferenceCoordinateSystem) representing the Unity space through the MR OpenXR Plug-In, as described in the plug-in documentation:
using Windows.Perception.Spatial;
using Microsoft.MixedReality.OpenXR;
SpatialCoordinateSystem unityReferenceCoordinateSystem = PerceptionInterop.GetSceneCoordinateSystem(Pose.identity) as SpatialCoordinateSystem;
and I obtain the camera space (cameraCoordinateSystem) from the Windows.Media.Capture.Frames.MediaFrameReference camera frame instance acquired from a MediaFrameReader by
MediaFrameReference mediaFrame; // acquired camera frame
SpatialCoordinateSystem cameraCoordinateSystem = mediaFrame.CoordinateSystem;
Finally I obtain the required transformation matrix by using SpatialCoordinateSystem.TryGetTransformTo() as you can see in my complete method:
using Microsoft.MixedReality.Toolkit;
public bool TryGetCameraToUnityMatrix(out Matrix4x4 cameraToUnity)
{
    // (obtain MediaFrameReader, acquire a camera frame and obtain
    // unityReferenceCoordinateSystem and cameraCoordinateSystem as described above)
    System.Numerics.Matrix4x4? camToUnitySysMatrix = cameraCoordinateSystem.TryGetTransformTo(unityReferenceCoordinateSystem);
    if (!camToUnitySysMatrix.HasValue)
    {
        // the out parameter must be assigned on every path for this to compile
        cameraToUnity = Matrix4x4.identity;
        return false;
    }
    cameraToUnity = camToUnitySysMatrix.Value.ToUnity();
    return true;
}
This all works fine so far - until I bring the HoloLens into another spatial environment that is not connected to the one that was present when the app was started.
The following scenario should make clear what I mean:
Start the app on HL2
Acquire the cameraToUnity matrix as described --> works fine
Set the HL to stand-by
Go to another room; the HL's spatial awareness does not know how the two rooms are connected
Wake up HL and open the (still running) app.
Acquire the cameraToUnity matrix. --> FAILS:
camToUnitySysMatrix.HasValue returns false (even though both arguments unityReferenceCoordinateSystem and cameraCoordinateSystem are not null.)
Set the HL to stand-by again
Go back to the initial environment where the app was originally started
Wake up HL and open the (still running) app.
Acquire the cameraToUnity matrix as described --> works fine again! (camToUnitySysMatrix has valid value again)
I also made sure that unityReferenceCoordinateSystem = PerceptionInterop.GetSceneCoordinateSystem(Pose.identity) is called again after the environment change, and that the MediaFrameReader is freshly instantiated from a new MediaCapture instance.
Evidently, a transformation between the two SpatialCoordinateSystems fails whenever it is attempted in the non-initial spatial environment.
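For completeness, the re-initialization after waking looks roughly like this (simplified sketch inside an async method; device/profile selection and error handling omitted):
using System.Linq;
using Windows.Media.Capture;
using Windows.Media.Capture.Frames;
using Windows.Perception.Spatial;
using Microsoft.MixedReality.OpenXR;
using UnityEngine;

// re-acquire the Unity reference coordinate system
unityReferenceCoordinateSystem =
    PerceptionInterop.GetSceneCoordinateSystem(Pose.identity) as SpatialCoordinateSystem;

// re-create the capture pipeline from a fresh MediaCapture instance
var mediaCapture = new MediaCapture();
await mediaCapture.InitializeAsync();
MediaFrameSource frameSource = mediaCapture.FrameSources.Values
    .First(source => source.Info.SourceKind == MediaFrameSourceKind.Color);
MediaFrameReader mediaFrameReader = await mediaCapture.CreateFrameReaderAsync(frameSource);
await mediaFrameReader.StartAsync();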
Any ideas on how to solve this?
UPDATE
A minimal Unity sample project for reproducing this problem can be found here:
https://github.com/pjaydev/trygettransformto-so
Usually, users should wear the HoloLens and keep it turned on while moving around, so that the device can fully understand the environment. HoloLens also has requirements on the size of the space it is used in; if the room is too small, HoloLens may not work properly. If you are still unsure about this issue, or if there is business impact, you can submit a ticket via aka.ms/HLSupport.

Disable/Enable VR from code using SteamVR 2.0.1

As the title says, I'm trying to enable/disable VR between different applications, and I need to do it as many times as I want.
I'm using Unity 2017.4 and SteamVR 2.0.1. I'm trying to do it with two different scenes of the same project (testing one in the editor, and launching the other as .exe).
This approach is not working, since apparently Actions and Poses are not handled correctly when VR is stopped with XRSettings.enabled = false.
Has anyone experienced the same behaviour?
I tried to find a workaround:
1) Also disabling/enabling the Player and Hands
...
// ** ENABLE VR **
if (enable)
{
    print("Enabling VR ...");
    XRSettings.LoadDeviceByName("OpenVR");
    yield return null;
    print("Loaded device: " + XRSettings.loadedDeviceName);
    XRSettings.enabled = enable;
    EnablePlayerAndHands(true);
}
// ** DISABLE VR **
else
{
    print("Disabling VR ...");
    EnablePlayerAndHands(false);
    XRSettings.LoadDeviceByName("");
    yield return null;
    print("Loaded device: " + XRSettings.loadedDeviceName);
    XRSettings.enabled = false;
}
...
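For context, the snippet above is the body of a coroutine (hence the yield statements); stripped of the Player/Hands handling, its minimal shape is something like this (my sketch; VRToggler and ToggleVR are placeholder names):
using System.Collections;
using UnityEngine;
using UnityEngine.XR;

public class VRToggler : MonoBehaviour
{
    // invoked e.g. as StartCoroutine(ToggleVR(true))
    private IEnumerator ToggleVR(bool enable)
    {
        XRSettings.LoadDeviceByName(enable ? "OpenVR" : "");
        yield return null; // LoadDeviceByName only takes effect on the next frame
        XRSettings.enabled = enable;
    }
}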
2) Added these lines in the SteamVR.cs file:
private void Dispose(bool disposing)
{
    ...
    // added code
    SteamVR_Input.initialized = false;
    SteamVR_Behaviour.instance = null;
}
(In order to make it work, I had to add a public setter for the SteamVR_Behaviour.instance property).
3) In SteamVR_Behaviour, I added a check inside Update(), LateUpdate() and FixedUpdate():
if (_instance != null) ... // do update
These modifications don't actually fix the problem, because I still get some exceptions when I re-enable VR, for example:
GetPoseActionData error (/actions/default/in/SkeletonLeftHand): InvalidHandle handle: 1152990670760182193
UnityEngine.Debug:LogError(Object)
Valve.VR.SteamVR_Action_Pose:UpdateValue(SteamVR_Input_Sources, Boolean) (at Assets/SteamVR/Input/SteamVR_Action_Pose.cs:96)
Valve.VR.SteamVR_Action_Skeleton:UpdateValue(SteamVR_Input_Sources, Boolean) (at Assets/SteamVR/Input/SteamVR_Action_Skeleton.cs:75)
Valve.VR.SteamVR_Input:UpdateSkeletonActions(SteamVR_Input_Sources, Boolean) (at Assets/SteamVR/Input/SteamVR_Input.cs:487)
Valve.VR.SteamVR_Input:UpdateSkeletonActions(Boolean) (at Assets/SteamVR/Input/SteamVR_Input.cs:462)
Valve.VR.SteamVR_Input:LateUpdate() (at Assets/SteamVR/Input/SteamVR_Input.cs:352)
Valve.VR.SteamVR_Behaviour:LateUpdate() (at Assets/SteamVR/Scripts/SteamVR_Behaviour.cs:224)
...but they are raised just a few times and then they stop; it could be due to some bad timing. By the way, I put an Interactable GameObject in the otherwise empty scene just to test whether I could still interact with it after disabling/enabling, and it seems that I can.
Still, I would expect an easier and cleaner way to achieve my goal. Am I missing something obvious, or is it a bug in the newest SteamVR version?
Thanks in advance for any help.
Please see this link for reference:
https://docs.unity3d.com/ScriptReference/XR.XRSettings-enabled.html
Stopping a VR session is not supported on GearVR; I'm not sure about SteamVR.

Issue with moving clients/players with Photon Networking in Unity

I'm making a game where players are put onto plates and random events happen to the plates/players. I've gone through a multitude of networking solutions (UNET, Mirror, Mirror + Steamworks P2P), but none of them really worked well (the main issue being that players couldn't join via IP), so I settled on Photon. After PUN v2 just wouldn't work in my project (an error in PhotonEditor.cs), I went with PUN v1, which is working perfectly for the most part.
The issue right now is that I'm trying to spawn players on the plates they own, but they only spawn on the first plate, despite each plate being owned by a player (each plate has a script that records which player 'owns' it, and this is being set correctly; a minimal version is sketched below). This only seems to happen when I test with a real player/client. If I create a fake player by just spawning in the prefab and then running it, both players are moved to their correct plates, so it seems to be a networking issue.
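For context, a minimal version of that plate script would be something like this (sketch; only the assignedTo field is what the code below relies on):
using UnityEngine;

public class Plate : MonoBehaviour
{
    // the player GameObject that owns this plate; set during plate assignment
    public GameObject assignedTo;
}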
Here's the code responsible for moving the players to their plates.
foreach (GameObject plate in spawnedPlates)
{
    // Loop through each plate, get the player assigned to it and move that player onto the plate.
    GameObject player = plate.GetComponent<Plate>().assignedTo;
    PlayerClientControl playerController = player.GetComponent<PlayerClientControl>(); // originally for UNET/Mirror; left in in case I need it (it had an RPC that moved the player)
    player.transform.position = plate.transform.position + new Vector3(0, 4, 0);
}
What am I doing wrong?
EDIT:
It seems they ARE being moved to the correct positions (or at least, the code is trying to move them there): a debug print of the target teleport position confirms it, further cementing that this is a networking issue, but I have no idea how to fix it.
I assume it's something to do with host/client synchronization? If anyone could shed some light on this, that'd be great, because I'm pulling my hair out over this.
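For illustration, would something like the RPC-based move I had left over from UNET/Mirror be the right direction? (PUN v1 sketch; MoveTo is a made-up name, and it assumes the player prefab has a PhotonView:)
// on the master client, instead of setting transform.position directly:
PhotonView view = player.GetComponent<PhotonView>();
view.RPC("MoveTo", PhotonTargets.All, plate.transform.position + new Vector3(0, 4, 0));

// in PlayerClientControl:
[PunRPC]
private void MoveTo(Vector3 position)
{
    // every client applies the move locally
    // (PhotonTargets.AllBuffered would also apply it to late joiners)
    transform.position = position;
}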
If you inspect the player objects at runtime, it will probably give you a clue. I had the same issue, and in my case I had an InputAction attached to my players (taken from Unity's StarterAssets). The first player in the game was assigned the keyboard and the second a controller. I solved it by disabling all the PlayerInput components and enabling only the local player's:
private void Awake()
{
    photonView = GetComponent<PhotonView>();
    if (photonView.IsMine)
    {
        // take control of the user input by disabling all PlayerInputs and then enabling mine
        var players = FindObjectsOfType<Player>();
        foreach (Player player in players)
        {
            player.GetComponent<PlayerInput>().enabled = false;
        }
        GetComponent<PlayerInput>().enabled = true;
    }
}

Unity: Time.timeScale = 0 not working

I am trying to display a player tutorial when the user first starts the game (using PlayerPrefs), and I want the game to be paused while it is shown. The problem I am facing is that Time.timeScale = 0 does not pause the game when it is set inside Start() (the tutorialCanvas does get displayed), but it works when called from a button (the pause button).
Following is the code I use:
void Start()
{
    if (PlayerPrefs.HasKey("test4") == false)
    {
        tutorialCanvas.SetActive(true);
        Time.timeScale = 0;
    }
}
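For comparison, the button handler that does pause correctly is just this (sketch; the method name is mine, hooked to the button's OnClick event):
public void OnPauseButtonPressed()
{
    Time.timeScale = 0; // pauses as expected when called from here
}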
Time.timeScale = 0 has been really messed up for me too, so what I can recommend is this: if you just want to pause something, like the movement of a character, you can try it like this:
PlayerMovement playerMovement;
if (Input.GetKey(KeyCode.P))
{
    // let's disable the PlayerMovement script only, not the whole object
    playerMovement = GetComponent<PlayerMovement>();
    playerMovement.enabled = false;
}
I've been stuck there too, so I made an alternative approach.
I had this issue recently, although it only happened in a build, not in the editor. Changing timeScale to something like 0.0001f would fix it, but that wasn't a great solution. Creating an input settings asset (under Camera -> Player Input -> Open Input Settings) seems to have fixed it for builds.

Unity Start not called on real build

I've made a dice app as a project to learn to work with Unity (it was good enough in my eyes that I put it on the Google Play Store), but when I downloaded it from there, the Start function of at least 2 scripts isn't called, and I have no idea whether the other Start functions are being called.
Here you can see 2 of the Start functions that aren't called:
void Start()
{
    light = light.GetComponent<Light>();
    GetComponent<Button>().onClick.AddListener(TaskOnClick);
    rawImage = GetComponent<RawImage>();
    isLockMode = false;
    rawImage.texture = getIconLock(isLockMode);
}
void Start()
{
    Screen.fullScreen = false;
    Dice.AddDie();
    Input.gyro.enabled = true;
    GlobalConfig.onShowShadowsChanged += onShadowsEnabledChange;
}
They work when I use Unity Remote on my smartphone, and they also work when I just run the game in Unity without the Remote...
The first script is attached to a UI element and the second script is attached to an empty GameObject called 'App'.
It's even weirder because they used to work, but then I switched PCs (while using the same code).
I think something is wrong with the build itself.
Found out the problem: the build was generating two files, an .apk and an .obb expansion file. I had to untick 'Split Application Binary' in the Player settings.
