GyroToUnity Quaternion Derivation - C#

I've recently been developing a VR project, so I need to use the gyroscope. Here is some code I found in the Unity manual:
// The Gyroscope is right-handed. Unity is left handed.
// Make the necessary change to the camera.
void GyroModifyCamera()
{
    transform.rotation = GyroToUnity(Input.gyro.attitude);
}

private static Quaternion GyroToUnity(Quaternion q)
{
    return new Quaternion(q.x, q.y, -q.z, -q.w);
}
Sorry for my poor math and English. Can anyone give me some guidance to explain the meaning of the GyroToUnity function?

The Input.gyro.attitude sensor value is returned in right-handed coordinates, but Unity uses a left-handed coordinate system. You can read about both coordinate systems here.
The GyroToUnity function simply converts from the gyro's right-handed coordinates to the camera's left-handed coordinates. In effect, it flips the up/down and left/right rotation values coming from the gyro sensor as the device moves. The flipped value is returned and then assigned to the camera.
This is where the conversion/flipping is done:
return new Quaternion(q.x, q.y, -q.z, -q.w);
Hence the -q.z and -q.w: the minus signs flip those components.
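For a bit more intuition: a quaternion and its negation represent the same rotation, so (q.x, q.y, -q.z, -q.w) is equivalent to (-q.x, -q.y, q.z, q.w). That is the rotation axis mirrored in z with the rotation direction reversed, which is exactly the right-handed to left-handed change. A minimal sketch, assuming q holds a gyro attitude inside a Unity script, that checks the two forms agree:

// Sketch: the two sign patterns describe the same orientation,
// because a quaternion q and its negation -q are the same rotation.
Quaternion a = new Quaternion(q.x, q.y, -q.z, -q.w);
Quaternion b = new Quaternion(-q.x, -q.y, q.z, q.w);
Debug.Log(Quaternion.Angle(a, b)); // prints ~0 (up to float error)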
What happens if the conversion is not performed, i.e. if the GyroToUnity function is not used?
If the device is attached to your face and you move your head up, the camera moves down, and vice versa. If you move it right, the camera moves left, and vice versa. It should be doing the opposite; that's why the function is used.
The easiest way to see what's happening is to try it yourself: bypass the GyroToUnity function, then test it. You will see the difference.
void GyroModifyCamera()
{
    transform.rotation = Input.gyro.attitude;
}
Now, compare it with the original version of the code:
void GyroModifyCamera()
{
    transform.rotation = GyroToUnity(Input.gyro.attitude);
}

Related

Use MoveRotation to Look At Another Object Unity3d

Basically I am looking for a simple way to use the rigidbody/physics engine to have my ship look at another object (the enemy). I figured that getting the direction between my transform and the enemy transform, converting that to a rotation, and then using the built-in MoveRotation might work, but it causes a weird effect where it just kind of tilts the ship. I posted the bit of code as well as images from before and after the attempt to rotate the ship (the sphere is the enemy).
private void FixedUpdate()
{
    // There is a user on the ship's control panel.
    if (user != null)
    {
        var direction = (enemyOfFocus.transform.position - ship.transform.position);
        var rotation = Quaternion.Euler(direction);
        ship.GetComponent<Rigidbody>().MoveRotation(rotation);
    }
}
Before.
After.
Well, Quaternion.Euler builds a rotation from the given Euler angles, for convenience provided as a Vector3.
Your direction is rather a vector pointing from ship towards enemyOfFocus, and it has a magnitude, so the x, y, z values also depend on the distance between your objects. These are not Euler angles!
What you rather want is Quaternion.LookRotation (the example in the docs is basically exactly your use case ;) )
var direction = enemyOfFocus.transform.position - ship.transform.position;
var rotation = Quaternion.LookRotation(direction);
ship.GetComponent<Rigidbody>().MoveRotation(rotation);
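If you'd rather have the ship turn gradually instead of snapping to the target each physics step, Quaternion.RotateTowards can be layered on top. A minimal sketch, assuming a hypothetical turnSpeed field in degrees per second:

var rb = ship.GetComponent<Rigidbody>();
var direction = enemyOfFocus.transform.position - ship.transform.position;
var target = Quaternion.LookRotation(direction);
// Rotate at most turnSpeed degrees per second toward the enemy.
rb.MoveRotation(Quaternion.RotateTowards(rb.rotation, target, turnSpeed * Time.fixedDeltaTime));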

Attempting to add a Dash movement option using Unity and C#

I'm making a very simple platformer game, not to publish or anything like that, but rather to experiment with Unity and C#, and I've been trying to make a dash mechanic. Two ways I tried to go about this were:
Getting the player's position and teleporting them in any one direction, depending on the direction of the dash; this didn't work because I couldn't figure out how to find the player's position.
Making the player move fast in any one direction; this didn't work because of how the rest of the movement script works.
I would prefer to use the first option. Does anyone know how to find the player's location? I think I was able to find the transform position, but I didn't know how to use it since it was three values (x, y, and z) rather than one, and I didn't know how to get only one. Thanks in advance!
Not a definitive answer, since this depends on the code you are using; I haven't shown how to dash, there is a lot of camera code involved, and I am not coding in Unity anymore, so guessing this out without tests seems wrong, and I would recommend adding your code to the question. But the first option is simple enough to answer.
In the player script, use transform.position. This will not fail, since all Unity GameObjects have a world position and therefore a transform.
public class Player : MonoBehaviour {

    /* ... */

    void Dash () {
        // transform.position is the current position as a 3D vector
        var pos = transform.position;
        // to access its x, y and z components:
        var x = pos.x;
        var y = pos.y;
        var z = pos.z;
    }
}
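For completeness, a minimal sketch of option 1 (teleporting): writing a new value back to transform.position moves the player instantly. The dashDistance field and the Left Shift binding here are assumptions for illustration, not part of the question:

using UnityEngine;

public class Player : MonoBehaviour {

    public float dashDistance = 5f; // hypothetical tuning value

    void Update () {
        // Teleport in the facing direction when Left Shift is pressed
        // (use transform.right instead for a side-view 2D platformer).
        if (Input.GetKeyDown(KeyCode.LeftShift)) {
            transform.position += transform.forward * dashDistance;
        }
    }
}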

Xamarin Orientation Sensor Quaternion

I'm trying to determine a 3D vector (Vector3) representing my phone's orientation, without an angle. What I'm looking for is the vector that figuratively comes out of the back of the phone like a ray, basically a normal vector of the phone.
I know the quaternion gives me a transformation, but to what?
Furthermore, I found Quaternion.ToAxisAngle(), which transforms a quaternion into an axis and its respective roll angle. I thought: great, that's what I need, I can just ignore the angle.
When the phone lies on the table, I get the following axis:
axis = [0,0,-1]
And the angle basically represents the angle of the compass. In that particular situation, that's what I expected. But when the phone has a different arbitrary spatial position, the axis doesn't seem to be the phone's normal vector any more.
How can I calculate a normal vector to the phone's plane?
"Everything is relative" 😎
So what you need to do is save a quaternion and use that as an origin (it is also called the centre), and then you can localize any new quaternions against it to determine what orientation changes have occurred.
Calibration
A calibration can be performed by telling the user to hold the phone steady, then sampling and debouncing a stream of quaternions and averaging them over a period of time. But for this example, just place the device on a table, screen up, before starting the app, and grab the first sample (not great, but it works for a quickie).
Note: A System.Reactive observable works great for sampling and debouncing
Note: Store this quaternion as its inverse (Quaternion.Inverse), as that is one less calculation you have to perform on each sample.
Calc the difference on each sample:
You want to multiply the current sampled quaternion by the origin/centre (in inverse form).
Note: Remember multiplication is non-commutative with quaternions so order matters(!)
var currentQ = e.Reading.Orientation;
var q = Quaternion.Multiply(originQ, currentQ);
Convert your localized quaternion
So now you have a localized quaternion. You can convert it to a Vector3 (transform a base vector such as up, forward, or down by it), obtain some Euler angles from it, or ...
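For the normal vector asked about above, a minimal sketch, assuming the System.Numerics types that Xamarin.Essentials exposes: rotate the phone's resting normal by the localized quaternion.

using System.Numerics;

// The phone lay flat, screen up, during calibration, so its
// normal in the origin pose pointed straight up: +Z.
Vector3 baseNormal = Vector3.UnitZ;
// q is the localized quaternion from above; the result is the
// phone's current normal relative to the calibration pose.
Vector3 currentNormal = Vector3.Transform(baseNormal, q);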
Example:
So, using the Xamarin Essentials sample, this is how I would change the OrientationSensor_ReadingChanged event, as a very quick example.
Note: The sampling event is called A LOT depending upon the device, and SensorSpeed is really useless for controlling the output rate. If you are directly trying to update the screen with these samples (on a 1-to-1 basis), you could have serious problems: the Mono garbage collector can barely keep up with GC'ing the strings that are created when updating the UI (watch the application output; GC cycles occur constantly, even with SensorSpeed.UI set). I use Reactive Observables to smooth the samples and throttle the sensor output to reasonable update cycles (16 ms or more) before updating the UI.
void OrientationSensor_ReadingChanged(object sender, OrientationSensorChangedEventArgs e)
{
    if (originQ == Quaternion.Identity) // auto-origin on first sample or when requested
    {
        originQ = Quaternion.Inverse(e.Reading.Orientation);
    }
    var q = Quaternion.Multiply(originQ, e.Reading.Orientation);
    GetEulerAngles(q, out yaw, out pitch, out roll); // assuming "right-hand" orientation
    SmoothAndThrottle(yaw, pitch, roll, () =>
    {
        Device.BeginInvokeOnMainThread(() =>
        {
            pitchLabel.Text = pitch.ToString();
            rollLabel.Text = roll.ToString();
            yawLabel.Text = yaw.ToString();
            // This will appear to keep the image aligned to the origin/centre.
            direction.RotateTo(90 * yaw, 1);
            direction.RotationX = 90 * pitch;
            direction.RotationY = -90 * roll;
        });
    });
}
Note: Just sub in your favorite quaternion-to-Euler-angles routine (and write a smoothing and throttling routine if desired).
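For reference, one common quaternion-to-Euler routine that could stand in for GetEulerAngles. This is a sketch only: it returns radians and assumes a right-handed convention with yaw about Z; conventions vary between devices, so verify against yours (the sample above also scales the angles by 90, suggesting the author's own routine returned normalized values).

using System;
using System.Numerics;

static void GetEulerAngles(Quaternion q, out double yaw, out double pitch, out double roll)
{
    // roll: rotation about X
    double sinrCosp = 2 * (q.W * q.X + q.Y * q.Z);
    double cosrCosp = 1 - 2 * (q.X * q.X + q.Y * q.Y);
    roll = Math.Atan2(sinrCosp, cosrCosp);

    // pitch: rotation about Y, clamped to +/-90 degrees at the poles
    double sinp = 2 * (q.W * q.Y - q.Z * q.X);
    pitch = Math.Abs(sinp) >= 1 ? Math.Sign(sinp) * Math.PI / 2 : Math.Asin(sinp);

    // yaw: rotation about Z
    double sinyCosp = 2 * (q.W * q.Z + q.X * q.Y);
    double cosyCosp = 1 - 2 * (q.Y * q.Y + q.Z * q.Z);
    yaw = Math.Atan2(sinyCosp, cosyCosp);
}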

How do I get frame-rate independent touch position for Unity?

I am making a mobile painting game in Unity and I've encountered a serious problem: the Input class in Unity is frame-dependent. Thus I can't get the touch position frequently enough to make my application draw smoothly; as a result I get something like isolated points on the background, not connected to each other.
I tried just connecting the points that are detected in Unity, and then my result was, of course, the same points connected with lines. I was trying this in the Unity Editor at about 180-200 fps, and on a mobile phone at 30-50 fps it looks even worse. I expect that I have to get the touch positions somehow in Android Studio or Xcode, and only then use them in my C# code in Unity.
Am I right to think I should use tools external to Unity, or is there an easier way to do it directly in Unity? If there is none and I am right, can somebody give me some links to guides/tutorials on how to do it and integrate it with Unity? I have never worked outside of Unity and have no experience integrating external tools with it.
Note: I've tried FixedUpdate without any luck; it doesn't matter how often I try to read the position variables, what matters is how often they are updated. I also tried Event.current.mousePosition (in the Unity Editor) in the OnGUI method, but it also made no difference.
Update: As I have already said, I need to get positions more frequently than the Input class gives them to me; it does not update fast enough. Here's what I get without connecting the points: the image shows the mousePosition detection frequency at 180-200 fps. On phones it is even slower!
Update: Here is my simplified code.
void Draw() // this method is invoked every frame
{
    // some calculations of x and y based on Input variables
    currentMousePosition = new Vector2(x, y); // current mouse position on sprite
    if (currentMousePosition != previousMousePosition)
    {
        // walk one texel at a time from the previous position toward the current one
        while (currentMousePosition != previousMousePosition)
        {
            mySprite.texture.SetPixels((int)previousMousePosition.x, (int)previousMousePosition.y, 3, 3, myColorArray);
            if (currentMousePosition.x > previousMousePosition.x)
                previousMousePosition.x++;
            if (currentMousePosition.x < previousMousePosition.x)
                previousMousePosition.x--;
            if (currentMousePosition.y > previousMousePosition.y)
                previousMousePosition.y++;
            if (currentMousePosition.y < previousMousePosition.y)
                previousMousePosition.y--;
        }
    }
    else
    {
        mySprite.texture.SetPixels((int)currentMousePosition.x, (int)currentMousePosition.y, 3, 3, myColorArray);
    }
    previousMousePosition = currentMousePosition;
}
// mySprite.texture.Apply() is invoked independently elsewhere to improve performance
The issue is that it is not possible to queue up touch positions that occurred mid-frame, so by quickly sliding your finger you will miss certain texels on your image. You should look at Bresenham's line algorithm: it is super fast, all integer math. Inside your Update() function, call this method:
Vector2 oldPoint;

public void UpdateDrawPoint(Vector2 newPoint) {
    BresenhamLine(newPoint, oldPoint);
    oldPoint = newPoint;
}
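A sketch of what BresenhamLine might look like, assuming a hypothetical PlotPixel(x, y) helper that wraps the SetPixels call from the question:

void BresenhamLine(Vector2 from, Vector2 to)
{
    int x0 = (int)from.x, y0 = (int)from.y;
    int x1 = (int)to.x, y1 = (int)to.y;
    int dx = Mathf.Abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -Mathf.Abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    while (true)
    {
        // e.g. mySprite.texture.SetPixels(x0, y0, 3, 3, myColorArray)
        PlotPixel(x0, y0);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}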

Unity3D: How to determine the corners of a gameobject in order to position other gameobjects according to it?

My question is whether there is a way to know the coordinates of the corners of a GameObject. What I have is three Vuforia AR targets and a GameObject, a cube in this case.
What I need to achieve is that when I move the targets around, the corners of the cube follow the targets, e.g. the cube becomes as wide as the space between the targets.
Right now it checks the distance between the targets and sets the scale of the cube accordingly. This results in the cube always being expanded from its set position, which makes the positioning of the targets awkward in real life.
Here are a couple of pictures showing what it does now, taken during execution:
Here is the code attached to the cube-object:
using UnityEngine;
using System.Collections;

public class show : MonoBehaviour {

    float distancex;
    float distancez;

    // Use this for initialization
    void Start () {
        renderer.enabled = false;
    }

    // Update is called once per frame
    void Update () {
        if (GameObject.Find ("target1").renderer.enabled && GameObject.Find ("target2").renderer.enabled &&
            GameObject.Find ("target3").renderer.enabled && GameObject.Find ("target4").renderer.enabled)
        {
            renderer.enabled = true;
        }

        distancex = Vector3.Distance (GameObject.Find ("target1").transform.position, GameObject.Find ("target2").transform.position);
        distancez = Vector3.Distance (GameObject.Find ("target2").transform.position, GameObject.Find ("target3").transform.position);
        Debug.Log (distancex);
        Debug.Log (distancez);

        transform.localScale = new Vector3 (distancex, transform.localScale.y, transform.localScale.z);
        transform.localScale = new Vector3 (transform.localScale.x, transform.localScale.y, distancez);
        Debug.Log (transform.localScale);
    }
}
How do I make the corners of the object follow the targets? I need the width of the side to be the distance between the targets; is there any way to achieve this without using the scale?
I know this is quite some time after you asked the question, but I came across this as I was looking to sort something similar out myself.
What I found I needed (and others may be helped) is to use Renderer.bounds
What it looks like in practice for me:
GameObject currentGameObject; // the object whose bounds you want
Renderer rend;

void Update () {
    rend = currentGameObject.GetComponent<Renderer>();
    Debug.Log(rend.bounds.max);
    Debug.Log(rend.bounds.min);
}
My object in this case was a quad at position (0, 0, 200) with a scale of (200, 200, 1). The output of rend.bounds.max was (100.0, 100.0, 200.0); the min was (-100.0, -100.0, 200.0). This gives you the position of each corner (granted, my example was in 2D space, but this should work for 3D as well).
To make it a little more specific for your example: if you wanted the top-right corner, you could take the x and y of renderer.bounds.max; for the top-left you would take renderer.bounds.max.y and renderer.bounds.min.x. Hope that helps!
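In 3D, all eight corners fall out of the same min/max pair. A quick sketch (keep in mind Renderer.bounds is an axis-aligned world-space box, so for a rotated object these are the corners of its bounding box rather than of the mesh itself):

// All 8 corners of the renderer's axis-aligned bounding box.
Bounds b = rend.bounds;
Vector3[] corners =
{
    new Vector3(b.min.x, b.min.y, b.min.z),
    new Vector3(b.max.x, b.min.y, b.min.z),
    new Vector3(b.min.x, b.max.y, b.min.z),
    new Vector3(b.max.x, b.max.y, b.min.z),
    new Vector3(b.min.x, b.min.y, b.max.z),
    new Vector3(b.max.x, b.min.y, b.max.z),
    new Vector3(b.min.x, b.max.y, b.max.z),
    new Vector3(b.max.x, b.max.y, b.max.z)
};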
Cheers!
1. Create 8 empty game objects.
2. Make them children of the "cube" object to be tracked.
3. Move them in the Editor to sit at the 8 corners of your tracked game object. Since they are children of the tracked object, their positions will be {+/- 0.5, +/- 0.5, +/- 0.5}.
4. Name them memorably (e.g., "top-left-back corner").
5. These 8 objects will move and scale with the tracked object, staying at its corners. Any time you want to know where a corner is, simply reference one or more of the 8 game objects by name and get the Vector3 from each of their transform.position.
Note: If you're doing step 5 repeatedly, you may want to avoid potentially costly GameObject.Find() calls (or similar) each time. Do them once and save the 8 game objects in a variable, as in the sketch below.
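A minimal sketch of that caching, in a script on the tracked object and assuming the corner names from step 4 (shown for a single corner):

Transform topLeftBackCorner;

void Start()
{
    // transform.Find looks up a direct child by name; do it once here.
    topLeftBackCorner = transform.Find("top-left-back corner");
}

void Update()
{
    // World-space position of that corner, kept current as the cube moves and scales.
    Vector3 cornerWorldPos = topLeftBackCorner.position;
}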
I worked my way around the problem by determining the positions of the targets and taking their mean.
This is not flawless, however, since it is difficult to determine the scale factor with which the models would accurately follow the targets' corners when bringing external models into the system.
