I'm using the Motion API and I'm trying to figure out a control scheme for the game I'm currently developing.
What I'm trying to achieve is for an orientation of the device to correlate directly to a position, such that tilting the phone forward and to the left represents the top-left position, and tilting it back and to the right represents the bottom-right position.
Photos to make it clearer (the red dot would be the calculated position).
Forward and Left
Back and Right
Now for the tricky bit: I also have to make sure the values take into account the left-landscape and right-landscape device orientations (portrait is the default, so no calculation is needed for it).
Has anyone done anything like this?
Notes:
I've tried using the yaw, pitch, roll and Quaternion readings.
I just realised the behaviour I'm describing is a lot like a spirit level.
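The mapping I'm after could be sketched like this (a rough illustration, not working code: it assumes pitch/roll in radians with zero when the device is level, and the ±45° working range and names are made up):

```csharp
// Illustrative: map device pitch/roll to a screen position.
// pitch/roll are assumed to be in radians, zero when the device is level.
Vector2 AttitudeToScreen(float pitch, float roll, float screenWidth, float screenHeight)
{
    const float range = MathHelper.PiOver4; // assume +/-45 degrees of tilt maps to the full screen

    // Normalize to [-1, 1], clamping tilt beyond the working range.
    float nx = MathHelper.Clamp(roll / range, -1f, 1f);
    float ny = MathHelper.Clamp(pitch / range, -1f, 1f);

    // Map [-1, 1] to screen coordinates, centered in the middle of the screen.
    return new Vector2((nx + 1f) * 0.5f * screenWidth,
                       (ny + 1f) * 0.5f * screenHeight);
}
```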
Sample:
// Get device facing vector
public static Vector3 GetState()
{
    lock (lockable)
    {
        var down = Vector3.Forward;
        var direction = Vector3.Transform(down, state);

        switch (Orientation)
        {
            case Orientation.LandscapeLeft:
                return Vector3.TransformNormal(direction, Matrix.CreateRotationZ(-rightAngle));
            case Orientation.LandscapeRight:
                return Vector3.TransformNormal(direction, Matrix.CreateRotationZ(rightAngle));
        }

        return direction;
    }
}
You would like to control an object on screen using the acceleration sensor.
protected override void Initialize()
{
    ...
    Accelerometer acc = new Accelerometer();
    acc.ReadingChanged += AccReadingChanged;
    acc.Start();
    ...
}
This is the method that calculates the position of the object:
void AccReadingChanged(object sender, AccelerometerReadingEventArgs e)
{
    // The Y axis is the same in both orientations
    this.circlePosition.Y = (float)e.Z * GraphicsDevice.Viewport.Height + GraphicsDevice.Viewport.Height / 2.0f;

    // The X axis needs to be negated when oriented LandscapeLeft
    if (Window.CurrentOrientation == DisplayOrientation.LandscapeLeft)
        this.circlePosition.X = -(float)e.Y * GraphicsDevice.Viewport.Width + GraphicsDevice.Viewport.Width / 2.0f;
    else
        this.circlePosition.X = (float)e.Y * GraphicsDevice.Viewport.Width + GraphicsDevice.Viewport.Width / 2.0f;
}
I'm using the sensor's Z axis as my in-game Y and the sensor's Y axis as my in-game X. Calibration is done by subtracting the sensor's Z reading from the center. This way, the sensor axes correspond directly to a position (as a percentage) on screen.
For this to work we don't need the sensor's X axis at all...
This is just a quick implementation. You would still want to find a proper center for the sensor (Viewport.Width / 2f isn't really the center), sum and average a few measurements, and calibrate the sensor's X axis so the application can be used with the device flat or held at an angle, etc.
This code was tested on a Windows Phone device (and works).
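As a sketch of the calibration step mentioned above (averaging a few readings at startup and subtracting that offset from later readings; the field names here are illustrative, not from the code above):

```csharp
// Illustrative calibration sketch: average the first N readings to find a
// resting offset, then subtract it so the resting orientation maps to screen center.
private Vector3 calibrationOffset = Vector3.Zero;
private int samplesTaken = 0;
private const int SamplesNeeded = 3;

void AccReadingChanged(object sender, AccelerometerReadingEventArgs e)
{
    Vector3 reading = new Vector3((float)e.X, (float)e.Y, (float)e.Z);

    if (samplesTaken < SamplesNeeded)
    {
        // Accumulate readings while calibrating.
        calibrationOffset += reading;
        if (++samplesTaken == SamplesNeeded)
            calibrationOffset /= SamplesNeeded;
        return;
    }

    // Calibrated reading: zero when the device is at its resting orientation.
    Vector3 calibrated = reading - calibrationOffset;
    // ... map calibrated.Y / calibrated.Z to a screen position as above ...
}
```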
Related
I need to have a game object point north AND I want to combine this with gyro.attitude input. I have tried, unsuccessfully, to do this in one step: I couldn't make any gyro script I found on the net work with the additional requirement of always pointing north. Trust me, I have tried every script I could find on the subject. I eventually concluded that it can't be done this way (i.e. all-in-one). Then I thought I could get the same effect by breaking up the duties: a game object that always points north via the Y axis. Great, got that done like this:
_parentDummyRotationObject.transform.rotation = Quaternion.Slerp(_parentDummyRotationObject.transform.rotation, Quaternion.Euler(0, 360 - Input.compass.trueHeading, 0), Time.deltaTime * 5f);
With the game object pointing north on Y, I wanted to add a second game object, a camera in this case, rotating from gyro input on the X and Z axes. The reason I have to eliminate the Y axis on the camera is that I get double rotation: with two things rotating at once (the camera and the game object), a 180-degree rotation yielded 360 in the scene. Remember, I need the game object to always point north (IRL) based on the device compass. If my device points east, my game object is rotated 90 degrees in the Unity scene as it points north.
I have read a lot about gyro camera controllers, and one thing I see mentioned often is that you shouldn't try to limit this to just 1 or 2 axes; with quaternions it's impossible when you don't know what you're doing, which I clearly do not.
I have tried all 3 solutions from this solved question: Unity - Gyroscope - Rotation Around One Axis Only and each has failed to rotate my camera on 1 axis to satisfy my rotational needs. Figured I'd try getting 1 axis working before muddying the waters with the 2nd axis. BTW, my requirements are simply that the camera should only rotate on 1 axis (in any orientation) based on the X axis of my device. If I could solve for X, then I thought it'd be great to get Z gyro input to control the camera as well. So far I cannot get the camera controlled on just 1 axis (X). Anyway, here are my findings...
The first solution, which used Input.gyro.rotationRateUnbiased, was totally inaccurate. That is, if I rotated my device around a few times and then put my phone/device down on my desk, the camera would be in a different rotation/location each time. There was no consistency. Here's my code for the first attempt/solution:
private void Update()
{
    Vector3 previousEulerAngles = transform.eulerAngles;
    Vector3 gyroInput = Input.gyro.rotationRateUnbiased;

    Vector3 targetEulerAngles = previousEulerAngles + gyroInput * Time.deltaTime * Mathf.Rad2Deg;
    targetEulerAngles.y = 0.0f;
    targetEulerAngles.z = 0.0f;

    transform.eulerAngles = targetEulerAngles;
}
The second solution was very consistent: I could rotate my device around, put it down on the desk, and the Unity camera always ended up in the same rotation/state. The problem was that the camera rotated on the one axis (X in this case) when I rotated my device on either the y or the x axis. Either movement of my phone caused the Unity camera to move on X. I don't understand why the y rotation of my phone caused the camera to rotate on X. Here is my code for solution #2:
private void Start()
{
    Input.gyro.enabled = true;
    startEulerAngles = transform.eulerAngles;
    startGyroAttitudeToEuler = Input.gyro.attitude.eulerAngles;
}

private void Update()
{
    Vector3 deltaEulerAngles = Input.gyro.attitude.eulerAngles - startGyroAttitudeToEuler;
    deltaEulerAngles.y = 0.0f;
    deltaEulerAngles.z = 0.0f;

    transform.eulerAngles = startEulerAngles - deltaEulerAngles;
}
The 3rd solution: I wasn't sure how to complete this last one, so it never really worked. With two axes zeroed out, the camera just flipped from facing left to right and back, or top to bottom and back, depending on which axes were commented out. If none of the axes were commented out (like the original solution), the camera would gyro around on all axes. Here's my code for attempt #3:
private void Start()
{
    _upVec = Vector3.zero;
    Input.gyro.enabled = true;
    startEulerAngles = transform.eulerAngles;
}

private void Update()
{
    Vector3 gyroEuler = Input.gyro.attitude.eulerAngles;
    phoneDummy.transform.eulerAngles = new Vector3(-1.0f * gyroEuler.x, -1.0f * gyroEuler.y, gyroEuler.z);

    _upVec = phoneDummy.transform.InverseTransformDirection(-1f * Vector3.forward);
    _upVec.z = 0;
    // _upVec.x = 0;
    _upVec.y = 0;

    transform.LookAt(_upVec);
    // transform.eulerAngles = _upVec;
}
Originally I thought it was my skills, but after spending a month on this I'm beginning to think that this is impossible to do. But that just can't be. I know it's a lot to absorb, but it's such a simple concept.
Any ideas?
EDIT: Thought I'd add my hierarchy:
CameraRotator (parent with script) -> MainCamera (child)
CompassRotator (parent) -> Compass (child with script which rotates parent)
I'd do this in the following way:
Camera with default (0, 0, 0) rotation
(screenshot)
Object placed at the center of the camera's default position.
Script for the Camera:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class NewBehaviourScript : MonoBehaviour
{
    Camera m_MainCamera;

    // Start is called before the first frame update
    void Start()
    {
        // Disable the sleep timeout during gameplay.
        // You can re-enable the timeout when menu screens are displayed as necessary.
        Screen.sleepTimeout = SleepTimeout.NeverSleep;

        // Enable the gyroscope.
        if (SystemInfo.supportsGyroscope)
        {
            Input.gyro.enabled = true;
        }

        m_MainCamera = Camera.main;
        m_MainCamera.enabled = true;
    }

    // Update is called once per frame
    void Update()
    {
        if (m_MainCamera.enabled)
        {
            // First - grab the gyro's orientation.
            Quaternion tAttitude = Input.gyro.attitude;

            // The device uses a left-handed orientation; we need to transform it to right-handed.
            Quaternion tGyro = new Quaternion(tAttitude.x, tAttitude.y, -tAttitude.z, -tAttitude.w);

            // The gyro attitude is tilted towards the floor and upside-down relative to what we want in Unity.
            // First rotate the orientation up 90deg on the X axis, then 180deg on the Z to flip it right-side up.
            Quaternion tRotation = Quaternion.Euler(-90f, 0, 0) * tGyro;
            tRotation = Quaternion.Euler(0, 0, 180f) * tRotation;

            // You can now apply this rotation to any Unity camera!
            m_MainCamera.transform.localRotation = tRotation;
        }
    }
}
With this script my object always faces SOUTH no matter what.
If you want the object to face NORTH, you just have to turn the view 180º on the Y axis as a last rotation:
Quaternion tRotation = Quaternion.Euler(-90f, 0, 0) * tGyro;
tRotation = Quaternion.Euler(0, 0, 180f) * tRotation;
//Face NORTH:
tRotation = Quaternion.Euler(0,180f, 0) * tRotation;
Hope this might help ;)
I'm making a 3D game in Unity where an object should move forward and backward as the Android device moves/accelerates along the Z axis, i.e. when the player moves the device in the +ve Z direction the object should move forward, and when the player moves the device in the -ve Z direction the object should move backward.
This game is a multiplayer game, and the players will move in a large football field.
My idea is to use the accelerometer to measure the acceleration of the device, integrate the acceleration data to get the device's speed along the Z axis, and use that speed to move the object.
Using this equation:
V2 = V1 + ΔA · ΔT
where
V2: final velocity
V1: initial velocity
ΔA: difference between the initial and final acceleration
ΔT: difference between the initial and final time
At first I tried to use kinematic equations to calculate the final speed, but then I realized they can only be used when acceleration is constant. So a friend of mine who studies physics derived this equation for me to use when acceleration is variable.
I know there will be some error in calculating the exact displacement, and that the error grows after integrating the acceleration, but this small percentage of error is okay for my application. I thought at first of using GPS instead of the accelerometer, but I found that GPS accuracy would be worse than the sensors'.
I also know the error will become incredibly high after some time, so I reset the acceleration and velocity values every 10 seconds. I'm also using a low-pass filter to reduce the sensor's noise.
public class scriptMove : MonoBehaviour
{
    const float kFilteringFactor = 0.1f;

    public Vector3 A1;
    public Vector3 A2;
    public Vector3 A2ramping; // for the low-pass filter
    public Vector3 V1;
    public Vector3 V2;
    public int SpeedFactor = 1000; // this factor scales acceleration up to move in the Unity world

    void resetAll()
    {
        Input.gyro.enabled = true;
        A2 = Vector3.zero;
        V1 = Vector3.zero;
        V2 = Vector3.zero;
        A2ramping = Vector3.zero;
    }

    // Use this for initialization
    void Start()
    {
        InvokeRepeating("resetAll", 0, 10);
    }

    // http://stackoverflow.com/a/1736623
    Vector3 ramping(Vector3 A)
    {
        A2ramping = A * kFilteringFactor + A2ramping * (1.0f - kFilteringFactor);
        return A - A2ramping;
    }

    void getAcceleration(float deltaTime)
    {
        Input.gyro.enabled = true;
        A1 = A2;
        A2 = ramping(Input.gyro.userAcceleration) * SpeedFactor;
        V2 = V1 + (A2 - A1) * deltaTime;
        V1 = V2;
    }

    // Update is called once per frame
    void Update()
    {
        getAcceleration(Time.deltaTime);
        float distance = -1f;
        Vector3 newPos = transform.position;
        transform.Translate(Vector3.forward * Time.deltaTime * V2.z * distance);
    }
}
The problem:
My code doesn't always work as expected when I move with the device:
Sometimes when I move forward (in the +ve Z direction of the device) the object also moves forward, but sometimes it doesn't move at all.
Sometimes when I'm standing still the object moves by itself.
Sometimes when I move forward and suddenly stop, the object does not stop.
My questions:
Are these strange behaviors due to the accuracy of the device, or is there something I'm missing in my code?
If I'm missing something in my code, what is it?
I searched a lot for methods to get the most accurate device position and found that I can integrate GPS with the accelerometer; how can I do this with my code in Unity?
I don't know if you still need this, but in case anyone needs it in the future, I'll post what I found:
When I first used the Unity accelerometer I thought its output was simply the device's rotation, and in a way it is, but more than that it gives you the acceleration; to get a usable value, though, you must first filter out gravity.
I created an Android plugin to read Android's standard accelerometer and linear accelerometer. The standard accelerometer gives a value similar to Unity's, the main difference being that it's raw, while Unity refines the output a bit: for example, if your game is landscape, Unity automatically swaps the X and Y axes, while the raw Android data doesn't. The linear accelerometer is a fusion of sensors, including the standard accelerometer, and its output is acceleration with gravity removed; but its update rate is terrible: while both the Unity and Android accelerometers update every frame, the linear accelerometer updated only every 4 to 5 frames, which is a terrible rate for the user experience.
But writing the Android plugin was worth it, because it showed me how to remove gravity from the Unity accelerometer, as you can find here:
https://developer.android.com/reference/android/hardware/SensorEvent.html
under Sensor.TYPE_ACCELEROMETER.
If you tilt the device, the Unity accelerometer gives you a value, for example 6, and holds that value while you stay in that position; it's not a wave, and whether you tilt back really fast or really slowly, it just goes from 6 back to 0 (assuming you move back to zero). What I wanted, and accomplished with the code I'm sharing below, is that when you turn the device you get a wave: it reports the acceleration and returns to zero, an acceleration/deceleration curve. If you turn really slowly the reported acceleration is almost zero; if you turn fast, the response reflects that speed. If this is the result you're looking for, you just need to create this class:
using UnityEngine;

public class AccelerometerUtil
{
    public float alpha = 0.8f;
    public float[] gravity = new float[3];

    public AccelerometerUtil()
    {
        Debug.Log("AccelerometerUtil Init");
        Vector3 currentAcc = Input.acceleration;
        gravity[0] = currentAcc.x;
        gravity[1] = currentAcc.y;
        gravity[2] = currentAcc.z;
    }

    public Vector3 LowPassFiltered()
    {
        /*
            https://developer.android.com/reference/android/hardware/SensorEvent.html

            gravity[0] = alpha * gravity[0] + (1 - alpha) * event.values[0];
            gravity[1] = alpha * gravity[1] + (1 - alpha) * event.values[1];
            gravity[2] = alpha * gravity[2] + (1 - alpha) * event.values[2];

            linear_acceleration[0] = event.values[0] - gravity[0];
            linear_acceleration[1] = event.values[1] - gravity[1];
            linear_acceleration[2] = event.values[2] - gravity[2];
        */
        Vector3 currentAcc = Input.acceleration;

        gravity[0] = alpha * gravity[0] + (1 - alpha) * currentAcc.x;
        gravity[1] = alpha * gravity[1] + (1 - alpha) * currentAcc.y;
        gravity[2] = alpha * gravity[2] + (1 - alpha) * currentAcc.z;

        Vector3 linearAcceleration =
            new Vector3(currentAcc.x - gravity[0],
                        currentAcc.y - gravity[1],
                        currentAcc.z - gravity[2]);

        return linearAcceleration;
    }
}
Once you have this class, just create it into your MonoBehaviour:
using UnityEngine;

public class PendulumAccelerometer : MonoBehaviour
{
    private AccelerometerUtil accelerometerUtil;

    // Use this for initialization
    void Start()
    {
        accelerometerUtil = new AccelerometerUtil();
    }

    // Update is called once per frame
    void Update()
    {
        Vector3 currentInput = accelerometerUtil.LowPassFiltered();
        // TODO: Create your logic with currentInput (linear acceleration)
    }
}
Notice that the TODO in the MonoBehaviour is left for you to implement: it's up to you to create an algorithm for handling these values. In my case I found it really useful to create a graphic output and analyze my acceleration curve before writing it.
Really hope it helps
The movement is based on acceleration, so it will be dependent on how quickly you rotate your device. This is also why the object does not stop when you do: suddenly stopping your device produces a lot of acceleration, which then gets added to the amount the object is translating and causes it to move a much greater distance than you intend.
I think what may be easier for you is to use the attitude of the gyro rather than the userAcceleration. The attitude returns a quaternion of the rotation of the device.
https://docs.unity3d.com/ScriptReference/Gyroscope-attitude.html
(You'll have to do a bit of experimenting, because I don't know what (0,0,0,0) on the attitude is. It could mean the device is flat on a table, or that it is sideways being held in front of you, or it could simply be the orientation of the device when the app first starts, I don't know how Unity initialises it.)
Once you have that Quaternion, you should be able to adjust velocity directly based off of how far in either direction the user is rotating the device. So if they rotate +ve Z-axis, you move forwards, if they move more, it moves faster, if they move -ve Z-axis, it slows down or moves backwards.
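A minimal sketch of that idea (not a definitive implementation: the tilt-to-speed mapping, the 45° divisor, and the maxSpeed value are all assumptions for illustration):

```csharp
using UnityEngine;

// Illustrative sketch: drive forward/backward speed from how far the device
// is tilted, read from the gyro attitude rather than from userAcceleration.
public class TiltDrive : MonoBehaviour
{
    public float maxSpeed = 5f;          // assumed tuning value
    private Quaternion referenceAttitude;

    void Start()
    {
        Input.gyro.enabled = true;
        // Capture the attitude at startup as the "neutral" pose.
        referenceAttitude = Input.gyro.attitude;
    }

    void Update()
    {
        // Rotation away from the neutral pose.
        Quaternion delta = Quaternion.Inverse(referenceAttitude) * Input.gyro.attitude;

        // Signed pitch angle in degrees, mapped into [-180, 180).
        float pitch = delta.eulerAngles.x;
        if (pitch > 180f) pitch -= 360f;

        // More tilt -> more speed; clamp to a sensible range.
        float speed = Mathf.Clamp(pitch / 45f, -1f, 1f) * maxSpeed;
        transform.Translate(Vector3.forward * speed * Time.deltaTime);
    }
}
```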
Regarding the GPS coordinates, you need to use LocationService for that.
http://docs.unity3d.com/ScriptReference/LocationService.html
You'll need to start LocationServices, wait for them to initialise (this bit is important), and then you can query the different parts using LocationService.lastData
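A minimal sketch of that startup sequence using Unity's `Input.location` API (the timeout value and logging are illustrative):

```csharp
using System.Collections;
using UnityEngine;

// Illustrative sketch: start the location service, wait for it to
// initialise, then read the last known fix from lastData.
public class GpsReader : MonoBehaviour
{
    IEnumerator Start()
    {
        if (!Input.location.isEnabledByUser)
            yield break; // user has location disabled on the device

        Input.location.Start();

        // Wait (up to ~20s) for the service to initialise -- this step matters.
        int maxWait = 20;
        while (Input.location.status == LocationServiceStatus.Initializing && maxWait > 0)
        {
            yield return new WaitForSeconds(1);
            maxWait--;
        }

        if (Input.location.status == LocationServiceStatus.Running)
        {
            LocationInfo fix = Input.location.lastData;
            Debug.Log("Lat: " + fix.latitude + " Lon: " + fix.longitude);
        }
    }
}
```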
I am trying to do the same thing as you. It is not trivial to get device's linear acceleration using just one sensor. You will need to implement a solution using both the accelerometer and the gyroscope (sensor fusion). Google has an android specific solution which behaves differently according to how sophisticated your device is. It uses multiple sensors as well as low/high pass filters (see Android TYPE_LINEAR_ACCELERATION sensor - what does it show?).
Google's Tango tablet should have sensors to address such issues.
If you want to get accelerometer data in Unity, try:
public class scriptMove : MonoBehaviour
{
    private float accelX;
    private float accelY;
    private float accelZ;

    void Update()
    {
        accelX = Input.acceleration.x;
        accelY = Input.acceleration.y;
        accelZ = Input.acceleration.z;
        // pass values to your UI
    }
}
What I am currently trying is to port Google's solution to Unity using IKVM.
This link might be helpful too:
Unity3D - Get smooth speed and acceleration with GPS data
I have written this code for rotating and moving the camera.
Sadly, I am not very experienced with matrices and 3D programming, since I started only a few days ago:
plLookAt = new Vector3(plPos.X, plPos.Y, plPos.Z - 20);

if (kb.IsKeyDown(Keys.W))
{
    plPos.Z++;
}
if (kb.IsKeyDown(Keys.A))
{
    plPos.X++;
}
if (kb.IsKeyDown(Keys.S))
{
    plPos.Z--;
}
if (kb.IsKeyDown(Keys.D))
{
    plPos.X--;
}

view = Matrix.CreateLookAt(new Vector3(0, 0, 0), plLookAt, Vector3.UnitY);
view = view * Matrix.CreateRotationY(MathHelper.ToRadians(rotations.Y)) * Matrix.CreateRotationX(MathHelper.ToRadians(rotations.X));
view = view * Matrix.CreateTranslation(plPos);

if (PrMS.X < ms.X)
{
    rotations.Y++;
}
else if (PrMS.X > ms.X)
{
    rotations.Y--;
}
if (PrMS.Y < ms.Y)
{
    rotations.X++;
}
else if (PrMS.Y > ms.Y)
{
    rotations.X--;
}
plPos is the Player (camera) position
view is the view Matrix
rotations is where the rotation of the camera is saved (Vector3)
PrMS is the MouseState of the previous frame.
This code doesn't work, and I think it is because of the order the multiplications are in, but I'm not sure. What is the best way of rotating the camera so that it actually works and I can move in the direction the camera is facing?
Thank You in advance!
Your problem is not the order of the matrix multiplication; it is that your rotations need to be around the camera's local axes and you are performing them around the world axes.
I think you are expecting that applying .CreateRotationX(rotations.X) and .CreateRotationY(rotations.Y) will cause the camera to change pitch and yaw about its own local axes. But these rotations always rotate about the world axes. If your camera's local X axis happened to be aligned with the world X axis and you performed a .CreateRotationX(), it would work as expected. But in your case you rotate the camera about the Y axis first, which throws the camera's local X axis out of alignment with the world X axis, so the next rotation (the X) does not go as expected. Even if you did the X first and the Y second, the X would throw the Y out of whack. Although the order of matrix multiplication does matter in general, in your particular case it is the axis of rotation that is the problem.
It appears you are making a camera that is located at your player's position but can look left/right, up/down by mouse control. Here is a class that offers a different way to approach that criteria:
class Camera
{
    public Matrix View { get; set; }
    public Matrix Projection { get; set; }

    Vector2 centerOfScreen; // the current camera's lookAt in screen space (since the mouse coords are also in screen space)
    Vector3 lookAt;
    float cameraRotationSpeed;

    public Camera()
    {
        // initialize Projection matrix here
        // initialize View matrix here
        // initialize centerOfScreen here like this: new Vector2(screenWidth / 2, screenHeight / 2);
        cameraRotationSpeed = 0.01f; // set to taste
    }

    public void Update(Vector3 playerPosition, MouseState currentMouse)
    {
        Matrix cameraWorld = Matrix.Invert(View);

        Vector2 changeThisFrame = new Vector2(currentMouse.X, currentMouse.Y) - centerOfScreen;
        Vector3 axis = (cameraWorld.Right * changeThisFrame.X) + (cameraWorld.Up * changeThisFrame.Y); // builds a rotation axis based on the camera's local axes, not the world axes
        float angle = axis.Length() * cameraRotationSpeed;
        axis.Normalize();

        // rotate the lookAt around the camera's position
        lookAt = Vector3.Transform(lookAt - playerPosition, Matrix.CreateFromAxisAngle(axis, angle)) + playerPosition; // the typical "translate to origin, rotate, then translate back" routine

        View = Matrix.CreateLookAt(playerPosition, lookAt, Vector3.Up); // your new view matrix, rotated per the mouse input
    }
}
I'm trying to figure out how to create an accurate pinch zoom for my camera in Unity3D/C#. It must be based on the physical points on the terrain. The image below illustrates the effect I want to achieve.
The Camera is a child of a null which scales (between 0,1 and 1) to "zoom" as not to mess with the perspective of the camera.
So what I've come up with so far is that each finger must use a raycast to get the A & B points as well as the current scale of the camera parent.
EG: A (10,0,2), B (14,0,4), S (0.8,0.8,0.8) >> A (10,0,2), B (14,0,4), S (0.3,0.3,0.3)
The positions of the fingers will change but the hit.point values should remain the same by changing the scale.
BONUS: As a bonus, it would be great to have the camera zoom into a point between the fingers, not just the center.
Thanks so much for any help or reference.
EDIT:
I've come up with the code below so far, but it's not accurate in the way I want. It incorporates some of the ideas I had above, and I think the problem is that it shouldn't be /1000 but an equation involving the current scale somehow.
if (Input.touchCount == 2) {
    if (!CamZoom) {
        CamZoom = true;

        var rayA = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
        var rayB = Camera.main.ScreenPointToRay(Input.GetTouch(1).position);
        int layerMask = (1 << 8);

        if (Physics.Raycast(rayA, out hit, 1500, layerMask)) {
            PrevA = new Vector3(hit.point.x, 0, hit.point.z);
            Debug.Log("PrevA: " + PrevA);
        }
        if (Physics.Raycast(rayB, out hit, 1500, layerMask)) {
            PrevB = new Vector3(hit.point.x, 0, hit.point.z);
            Debug.Log("PrevB: " + PrevB);
        }

        PrevDis = Vector3.Distance(PrevB, PrevA);
        Debug.Log("PrevDis: " + PrevDis);

        // read the current scale before caching it as a vector
        PrevScale = this.transform.localScale.x;
        PrevScaleV = new Vector3(PrevScale, PrevScale, PrevScale);
        Debug.Log("PrevScale: " + PrevScale);
    }

    if (CamZoom) {
        var rayA = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
        var rayB = Camera.main.ScreenPointToRay(Input.GetTouch(1).position);
        int layerMask = (1 << 8);

        if (Physics.Raycast(rayA, out hit, 1500, layerMask)) {
            NewA = new Vector3(hit.point.x, 0, hit.point.z);
        }
        if (Physics.Raycast(rayB, out hit, 1500, layerMask)) {
            NewB = new Vector3(hit.point.x, 0, hit.point.z);
        }

        DeltaDis = PrevDis - Vector3.Distance(NewB, NewA);
        Debug.Log("Delta: " + DeltaDis);

        NewScale = PrevScale + (DeltaDis / 1000);
        Debug.Log("NewScale: " + NewScale);
        NewScaleV = new Vector3(NewScale, NewScale, NewScale);

        this.transform.localScale = Vector3.Lerp(PrevScaleV, NewScaleV, Time.deltaTime);
        PrevScaleV = NewScaleV;
        CamAngle();
    }
}
Intro
I had to solve this same problem recently and started off with the same approach as you, which is to think of it as though the user is interacting with the scene and we need to figure out where in the scene their fingers are and how they're moving and then invert those actions to reflect them in our camera.
However, what we're really trying to achieve is much simpler. We simply want the user to feel like the area of the screen that they are pinching changes size with the same ratio as their pinch.
Aim
First let's summarise our goal and constraints:
Goal: When a user pinches, the pinched area should appear to scale to match the pinch.
Constraint: We do not want to change the scale of any objects.
Constraint: Our camera is a perspective camera.
Constraint: We do not want to change the field of view on the camera.
Constraint: Our solution should be resolution/device independent.
With all that in mind, and given that we know that with a perspective camera objects appear larger when they're closer and smaller when they're further, it seems that the only solution for scaling what the user sees is to move the camera in/out from the scene.
Solution
In order to make the scene look larger at our focal point, we need to position the camera so that a cross-section of the camera's frustum at the focal point is equivalently smaller.
Here's a diagram to better explain:
The top half of the image is the "illusion" we want to achieve of making the area the user expands twice as big on screen. The bottom half of the image is how we need to move the camera to position the frustum in a way that gives that impression.
The question then becomes how far do I move the camera to achieve the desired cross-section?
For this we can take advantage of the relationship between the frustum's height h at a distance d from the camera when the camera's field-of-view angle in degrees is θ: h = 2 · d · tan(θ / 2).
Since our field of view angle θ is constant per our agreed constraints, we can see that h and d are linearly proportional.
This is useful to know because it means that any multiplication/division of h is equally reflected in d. Meaning we can just apply our multipliers directly to the distance, no extra calculation to convert height to distance required!
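That relationship can be sketched as a small helper (the class and method names are illustrative):

```csharp
using UnityEngine;

public static class FrustumMath
{
    // Height of a perspective frustum's cross-section at distance d,
    // for a vertical field of view of fovDegrees: h = 2 * d * tan(fov / 2).
    public static float FrustumHeightAtDistance(float d, float fovDegrees)
    {
        return 2f * d * Mathf.Tan(fovDegrees * 0.5f * Mathf.Deg2Rad);
    }

    // Inverse: the distance at which the cross-section has height h.
    // Because h and d are linearly proportional (fov fixed), halving h halves d.
    public static float DistanceForFrustumHeight(float h, float fovDegrees)
    {
        return h * 0.5f / Mathf.Tan(fovDegrees * 0.5f * Mathf.Deg2Rad);
    }
}
```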
Implementation
So we finally get to the code.
First, we take the user's desired size change as a multiple of the previous distance between their fingers:
Touch touch0 = Input.GetTouch(0);
Touch touch1 = Input.GetTouch(1);
Vector2 prevTouchPosition0 = touch0.position - touch0.deltaPosition;
Vector2 prevTouchPosition1 = touch1.position - touch1.deltaPosition;
float touchDistance = (touch1.position - touch0.position).magnitude;
float prevTouchDistance = (prevTouchPosition1 - prevTouchPosition0).magnitude;
float touchChangeMultiplier = touchDistance / prevTouchDistance;
Now we know by how much the user wants to scale the area they're pinching, we can scale the camera's distance from its focal point by the opposite amount.
The focal point is the intersection of the camera's forward ray and the thing you're zooming in on. For the sake of a simple example, I'll just be using the origin as my focal point.
Vector3 focalPoint = Vector3.zero;
Vector3 direction = camera.transform.position - focalPoint;
float newDistance = direction.magnitude / touchChangeMultiplier;
camera.transform.position = newDistance * direction.normalized;
camera.transform.LookAt(focalPoint);
That's all there is to it.
Bonus
This answer is already very long. So to briefly answer your question about making the camera focus on where you're pinching:
When you first detect a 2 finger touch, store the screen position and related world position.
When zooming, move the camera to put the world position back at the same screen position.
This is a small example:
if (_Touches.Length == 2)
{
    Vector2 _CameraViewsize = new Vector2(_Camera.pixelWidth, _Camera.pixelHeight);

    Touch _TouchOne = _Touches[0];
    Touch _TouchTwo = _Touches[1];

    Vector2 _TouchOnePrevPos = _TouchOne.position - _TouchOne.deltaPosition;
    Vector2 _TouchTwoPrevPos = _TouchTwo.position - _TouchTwo.deltaPosition;

    float _PrevTouchDeltaMag = (_TouchOnePrevPos - _TouchTwoPrevPos).magnitude;
    float _TouchDeltaMag = (_TouchOne.position - _TouchTwo.position).magnitude;
    float _DeltaMagDiff = _PrevTouchDeltaMag - _TouchDeltaMag;

    _Camera.transform.position += _Camera.transform.TransformDirection((_TouchOnePrevPos + _TouchTwoPrevPos - _CameraViewsize) * _Camera.orthographicSize / _CameraViewsize.y);
    _Camera.orthographicSize += _DeltaMagDiff * _OrthoZoomSpeed;
    _Camera.orthographicSize = Mathf.Clamp(_Camera.orthographicSize, _MinZoom, _MaxZoom) - 0.001f;
    _Camera.transform.position -= _Camera.transform.TransformDirection((_TouchOne.position + _TouchTwo.position - _CameraViewsize) * _Camera.orthographicSize / _CameraViewsize.y);
}
The second video of this tutorial explains it.
I am working on a simple game where you click on square sprites before they disappear. I decided to get fancy and make the squares rotate. Now, when I click on the squares, they don't always respond to the click. I think that I need to rotate the click position around the center of the rectangle(square) but I am not sure how to do this. Here is my code for the mouse click:
if ((mouse.LeftButton == ButtonState.Pressed) &&
    (currentSquare.Contains(mouse.X, mouse.Y)))
And here is the rotation logic:
float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
RotationAngle += elapsed;
float circle = MathHelper.Pi * 2;
RotationAngle = RotationAngle % circle;
I am new to Xna and programming in general, so any help is appreciated.
Thanks a lot,
Bill
So you're trying to determine if a point is in a rectangle, but when the rectangle is rotated?
The Contains() method will only work if the current rotation is 0 (I guess currentSquare is a rectangle representing the image position without rotation?).
What you will have to do is apply the opposite of the image's rotation to the mouse coordinates (rotating the mouse coordinates around the origin of your image), then check whether the new position is within currentSquare. You should be able to do all of this using vectors.
(Untested)
bool MouseWithinRotatedRectangle(Rectangle area, Vector2 mousePosition, float rotationAngle)
{
    // Work relative to the rectangle's center so we rotate about it.
    Vector2 center = new Vector2(area.Center.X, area.Center.Y);
    Vector2 local = mousePosition - center;

    // Un-rotate the mouse position by the rectangle's rotation angle.
    float originalAngle = (float)Math.Atan2(local.Y, local.X);
    float length = local.Length();
    local = new Vector2((float)(Math.Cos(originalAngle - rotationAngle) * length),
                        (float)(Math.Sin(originalAngle - rotationAngle) * length));

    // Back to world space, then the ordinary axis-aligned test works.
    Vector2 unrotated = local + center;
    return area.Contains((int)unrotated.X, (int)unrotated.Y);
}
If you don't need pixel-perfect detection, you can create a bounding sphere for each piece like this:

var PieceSphere = new BoundingSphere()
{
    Center = new Vector3(new Vector2(Position.X + Width / 2, Position.Y + Height / 2), 0f),
    Radius = Width / 2
};
Then create another bounding sphere around the mouse pointer: use the mouse coordinates for its position and 1f for its radius. Because the mouse pointer moves, it will keep changing its coordinates, so you also have to update that sphere's center on each update.
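A sketch of that per-frame update (XNA-style; the field names are illustrative):

```csharp
// Illustrative: keep a small bounding sphere glued to the mouse pointer.
BoundingSphere MouseBoundingSphere = new BoundingSphere(Vector3.Zero, 1f);

protected override void Update(GameTime gameTime)
{
    MouseState mouse = Mouse.GetState();

    // Re-center the sphere on the current mouse position each frame.
    MouseBoundingSphere.Center = new Vector3(mouse.X, mouse.Y, 0f);

    base.Update(gameTime);
}
```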
Checking for clicks is then really simple:
foreach (Piece p in AllPieces)
{
    if ((mouse.LeftButton == ButtonState.Pressed) && p.BoundingSphere.Intersects(MouseBoundingSphere))
    {
        // Do stuff
    }
}
If you are lazy like me, you can just do a circular distance check.
Assuming mouse and box.Center are Vector2:

// gets us c^2 according to the Pythagorean theorem
var radiusSquared = (box.Width / 2f) * (box.Width / 2f) + (box.Height / 2f) * (box.Height / 2f);
// distance check
bool inside = (mouse - box.Center).LengthSquared() < radiusSquared;

Not perfectly accurate, but the user would have a hard time noticing, and inaccuracies that leave a hitbox slightly too large are always forgiven. Not to mention the check is incredibly fast; just calculate the radius when the square is created.