My XNA game uses Farseer Physics, a 2D physics engine with an optional renderer for physics-engine data to help you debug. Visual debug data is very useful, so I have it set up to be drawn according to my camera's state. This works perfectly, except for z-axis rotation. See, I have a camera class that supports movement, zoom, and z-axis rotation. My debug class uses Farseer's debug renderer to create matrices that make the debug data be drawn according to the camera, and it does this well, except for one thing: the z-axis rotation uses the top-left corner of the screen as (0, 0), while my camera rotates around the center of the viewport. Does anyone have any tips for me? If I can make the debug drawer rotate around the center, it would work perfectly with my camera.
public void Draw(Camera2D camera, GraphicsDevice graphicsDevice)
{
    // Projection (location and zoom)
    float width = (1f / camera.Zoom) * ConvertUnits.ToSimUnits(graphicsDevice.Viewport.Width / 2);
    float height = (-1f / camera.Zoom) * ConvertUnits.ToSimUnits(graphicsDevice.Viewport.Height / 2);
    //projection = Matrix.CreateOrthographic(width, height, 1f, 1000000f);
    projection = Matrix.CreateOrthographicOffCenter(
        -width,
        width,
        -height,
        height,
        0f, 1000000f);

    // View (translation and rotation)
    float xTranslation = -1 * ConvertUnits.ToSimUnits(camera.Position.X);
    float yTranslation = -1 * ConvertUnits.ToSimUnits(camera.Position.Y);
    Vector3 translationVector = new Vector3(xTranslation, yTranslation, 0f);
    view = Matrix.CreateRotationZ(camera.Rotation) * Matrix.Identity;
    view.Translation = translationVector;

    DebugViewXNA.RenderDebugData(ref projection, ref view);
}
One common approach to solving this sort of issue is to move the object in question to the 'centre', rotate it, and then move it back.
So in this case, I'd suggest applying a transformation that moves the camera "up and across" by half the screen dimensions, applying the rotation, and then moving it back.
In general, in order to perform a rotation around a point (x, y, z), the operation needs to be broken down into 3 conceptual parts:
T is a translation matrix that translates by (-x, -y, -z)
R is a rotation matrix that rotates around the relevant axis
T^-1 is the matrix that translates back to (x, y, z)
The matrix you're after is the result of multiplying these 3, in reverse order:
M = T^-1 * R * T
The x,y,z you should use are your camera's position.
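To make that concrete, here is a minimal sketch in XNA terms, assuming the same Camera2D fields and the translationVector from the Draw method above (remember XNA uses row vectors, so the first transform applied is the leftmost factor):

Vector3 pivot = new Vector3(
    ConvertUnits.ToSimUnits(camera.Position.X),
    ConvertUnits.ToSimUnits(camera.Position.Y),
    0f);

// T: move the pivot to the origin, R: rotate about it, T^-1: move it back
Matrix rotateAroundCamera =
      Matrix.CreateTranslation(-pivot)
    * Matrix.CreateRotationZ(camera.Rotation)
    * Matrix.CreateTranslation(pivot);

// Then apply the camera translation as before
view = rotateAroundCamera * Matrix.CreateTranslation(translationVector);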
I would like to recreate one-to-one the rotation of a real-life controller joystick (i.e. a 360 controller) on a 3D joystick mesh (one that resembles the 360 controller's).
I thought about doing it by rotating the joystick on the X axis according to the magnitude of the input (mapping it to a min and max rotation on the X axis), and then finding the angle of the input and applying it to the Y axis of the 3D joystick.
This is the code I have; the joystick tilts properly on the X axis, but the rotation on the Y axis doesn't work:
public void SetStickRotation(Vector2 stickInput)
{
    float magnitude = stickInput.magnitude;

    // This function converts the magnitude to a range between the min and max
    // rotation I want to apply to the 3D stick in the X axis
    float rotationX = Utils.ConvertRange(0.0f, 1.0f, m_StickRotationMinX, m_StickRotationMaxX, magnitude);

    float angle = Mathf.Atan2(stickInput.x, stickInput.y);

    // I try to apply both rotations to the 3D model
    m_Stick.localEulerAngles = new Vector3(rotationX, angle, 0.0f);
}
I am not sure why it is not working, or even whether I am doing it the right way (i.e. perhaps there is a more optimal way to achieve it).
Many thanks for your input.
I would recommend rotating it by an amount determined by the magnitude, around a single axis determined by the direction. This avoids the joystick spinning around its own long axis, which would be especially noticeable with asymmetric joysticks such as pilots' joysticks:
Explanation in comments:
public void SetStickRotation(Vector2 stickInput)
{
    /////////////////////////////////////////
    // CONSTANTS (consider making a field) //
    /////////////////////////////////////////

    float maxRotation = 35f; // can rotate 35 degrees from the neutral position (up)

    ///////////
    // LOGIC //
    ///////////

    // Convert input to the x/z plane
    Vector3 stickInput3 = new Vector3(stickInput.x, 0f, stickInput.y);

    // Determine the axis of rotation that produces that direction
    Vector3 axisOfRotation = Vector3.Cross(Vector3.up, stickInput3);

    // Determine the angle of rotation
    float angleOfRotation = maxRotation * Mathf.Min(1f, stickInput.magnitude);

    // Apply that rotation to the joystick as a local rotation
    transform.localRotation = Quaternion.AngleAxis(angleOfRotation, axisOfRotation);
}
This will work for joysticks where:
the direction from the axle to the end of the stick is the local up direction,
the stick has zero (identity) rotation on neutral input, and
stickInput with y = 0 rotates the knob around the stick's forward/back axis, while stickInput with x = 0 rotates it around the stick's left/right axis.
Figured out the problem: atan2 returns the angle in radians, while the code assumed Euler degrees. As soon as I did the conversion, it worked well.
I'll put the code here in case anyone is interested (note the change on the atan2 line):
public void SetStickRotation(Vector2 stickInput)
{
    float magnitude = stickInput.magnitude;

    // This function converts the magnitude to a range between the min and max
    // rotation I want to apply to the 3D stick in the X axis
    float rotationX = Utils.ConvertRange(0.0f, 1.0f, m_StickRotationMinX, m_StickRotationMaxX, magnitude);

    // Convert from radians to degrees before using the angle as a Euler angle
    float angle = Mathf.Atan2(stickInput.x, stickInput.y) * Mathf.Rad2Deg;

    // Apply both rotations to the 3D model
    m_Stick.localEulerAngles = new Vector3(rotationX, angle, 0.0f);
}
For a project in Unity3D, I'm trying to transform all objects in the world by changing frames. What this means is that the origin of the new frame is rotated, translated, and scaled to match the origin of the old frame, and this operation is then applied to all other objects (including the old origin).
For this, I need a generalized, 3-dimensional (thus 4x4) transformation matrix.
I have looked at using Unity's built-in Matrix4x4.TRS() method, but this seems useless here, as it only applies the translation, rotation, and scale to a defined point.
What I'm looking for is a change of frames in which the new frame has a different origin, rotation, AND scale with regard to the original one.
To visualize the problem, I've made a small GIF (I currently have a working version in 3D, without using a Matrix, and without any rotation):
https://gyazo.com/8a7ab04dfef2c96f53015084eefbdb01
The values for each sphere:

Origin1 (Red Sphere)
Before:
  Position (-10, 0, 0)
  Rotation (0, 0, 0)
  Scale (4, 4, 4)
After:
  Position (10, 0, 0)
  Rotation (0, 0, 0)
  Scale (8, 8, 8)

Origin2 (Blue Sphere)
Before:
  Position (-20, 0, 0)
  Rotation (0, 0, 0)
  Scale (2, 2, 2)
After:
  Position (-10, 0, 0)
  Rotation (0, 0, 0)
  Scale (4, 4, 4)

World-Object (White Sphere)
Before:
  Position (0, 0, 10)
  Rotation (0, 0, 0)
  Scale (2, 2, 2)
After:
  Position (30, 0, 20)
  Rotation (0, 0, 0)
  Scale (4, 4, 4)
Currently I'm simply taking the vector between the two origins, scaling it by the scale difference between the two origins, and then applying that on top of the new position of the original (first) origin.
This will of course not work once rotation is applied to either of the two origins.
// Position in original axes
Vector3 positionBefore = testPosition.TestPosition - origin.TestPosition;
// Position in new axes
Vector3 positionAfter = (positionBefore * scaleFactor) + origin.transform.position;
What I'm looking for is a matrix that can do this (and include rotation, such that Origin2 is rotated to the rotation Origin1 was in before the transformation, and all other objects are moved to their correct positions).
Is there a way to do this without doing the full calculation on every vector (i.e. transforming the positionBefore vector)? It needs to be applied to a (very) large number of objects every frame, so it needs to be (fairly) optimized.
Edit: Scaling will ALWAYS be uniform.
There might be other solutions, but here is what I would do:
Wrap your objects in the following hierarchy
WorldAnchorObject
|- Red Sphere
|- Blue Sphere
|- White Sphere
Make sure the WorldAnchorObject has
position: 0,0,0
rotation: 0,0,0
localScale: 1,1,1
Position/rotate/scale the spheres (this will now happen relative to WorldAnchorObject).
Now all that is left is to transform the WorldAnchorObject: it will move, scale, and rotate everything else while keeping the relative transforms intact.
How exactly you move the world anchor is up to you. I guess you want to always center and normalize a certain child object. Maybe something like:
public void CenterOnObject(GameObject targetCenter)
{
    var targetTransform = targetCenter.transform;

    // The localPosition and localRotation are relative to the parent
    var targetPos = targetTransform.localPosition;
    var targetRot = targetTransform.localRotation;
    var targetScale = targetTransform.localScale;

    // First reset everything
    transform.position = Vector3.zero;
    transform.rotation = Quaternion.identity;
    transform.localScale = Vector3.one;

    // Set yourself to the inverted target position
    // After this action the target should be on 0,0,0
    transform.position = targetPos * -1;

    // Scale yourself relative to 0,0,0
    var newScale = Vector3.one * 1 / targetScale.x;
    ScaleAround(gameObject, Vector3.zero, newScale);

    // Now everything should be scaled so that the target
    // has scale 1,1,1 and still is on position 0,0,0

    // Now rotate around 0,0,0 so that the rotation of the target gets normalized
    transform.rotation = Quaternion.Inverse(targetRot);
}
// This scales an object around a certain pivot and
// takes care of the correct position translation as well
private void ScaleAround(GameObject target, Vector3 pivot, Vector3 newScale)
{
    Vector3 A = target.transform.localPosition;
    Vector3 B = pivot;
    Vector3 C = A - B; // diff from object pivot to desired pivot/origin

    float RS = newScale.x / target.transform.localScale.x; // relative scale factor

    // calc final position post-scale
    Vector3 FP = B + C * RS;

    // finally, actually perform the scale/translation
    target.transform.localScale = newScale;
    target.transform.localPosition = FP;
}
Now you call it, passing one of the children, e.g.
worldAnchorReference.CenterOnObject(RedSphere);
This should result in what you wanted to achieve. (Hacking this in on my smartphone, so no warranties, but if there is trouble I can check it as soon as I'm at a PC again. ;))
Nevermind, figured it out: I had to apply the rotation & scale to the translation before creating the TRS.
D'oh!
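For anyone hitting the same wall, a minimal sketch of that idea with Unity's Matrix4x4, using Origin1's before/after values from the question (changeOfFrame is an illustrative name; uniform scale assumed, as stated in the edit):

// Build the old and new frames as TRS matrices
Matrix4x4 before = Matrix4x4.TRS(new Vector3(-10, 0, 0), Quaternion.identity, Vector3.one * 4);
Matrix4x4 after = Matrix4x4.TRS(new Vector3(10, 0, 0), Quaternion.identity, Vector3.one * 8);

// One matrix for every object: world point -> old-frame local -> new world point
Matrix4x4 changeOfFrame = after * before.inverse;

// e.g. the white sphere: (0, 0, 10) maps to (30, 0, 20)
Vector3 newPosition = changeOfFrame.MultiplyPoint3x4(new Vector3(0, 0, 10));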
I'm making a post-processing shader (in unity) that requires world-space coordinates. I have access to the depth information of a certain pixel, as well as the onscreen location of that pixel. How can I find the world position that that pixel corresponds to, much like the function ViewportToWorldPos()?
It's been three years! I was working on this recently, and an older engineer helped me solve the problem. Here is the code.
First, we need to pass a camera transform matrix to the shader from script:
void OnRenderImage(RenderTexture src, RenderTexture dst)
{
    Camera currentCamera = Camera.main;
    Matrix4x4 matrixCameraToWorld = currentCamera.cameraToWorldMatrix;
    Matrix4x4 matrixProjectionInverse = GL.GetGPUProjectionMatrix(currentCamera.projectionMatrix, false).inverse;
    Matrix4x4 matrixHClipToWorld = matrixCameraToWorld * matrixProjectionInverse;

    Shader.SetGlobalMatrix("_MatrixHClipToWorld", matrixHClipToWorld);
    Graphics.Blit(src, dst, _material);
}
Then we need the depth information to reconstruct the clip-space position, like this:
inline half3 TransformUVToWorldPos(half2 uv)
{
    half depth = tex2D(_CameraDepthTexture, uv).r;

    #ifndef SHADER_API_GLCORE
        half4 positionCS = half4(uv * 2 - 1, depth, 1) * LinearEyeDepth(depth);
    #else
        half4 positionCS = half4(uv * 2 - 1, depth * 2 - 1, 1) * LinearEyeDepth(depth);
    #endif

    return mul(_MatrixHClipToWorld, positionCS).xyz;
}
That's all.
Have a look at this tutorial: http://flafla2.github.io/2016/10/01/raymarching.html
Essentially:
store one vector per corner of the screen, passed as constants, that goes from the camera position to said corner;
interpolate the vectors based on the screen-space position, or the UVs of your screen-space quad;
compute the final position as cameraPosition + interpolatedVector * depth (see the sketch below).
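A rough C# sketch of the setup half of that idea, assuming a Unity image effect (_material and the _CornerRay property names are illustrative):

// Compute the four camera-to-corner rays once per frame and pass them to the
// shader, which interpolates them across the screen and reconstructs
// worldPos = cameraPosition + interpolatedRay * depth01.
Camera cam = Camera.main;
Vector3[] corners = new Vector3[4];

// Frustum corners on the far plane, in camera space
// (order: bottom-left, top-left, top-right, bottom-right)
cam.CalculateFrustumCorners(new Rect(0, 0, 1, 1), cam.farClipPlane,
    Camera.MonoOrStereoscopicEye.Mono, corners);

for (int i = 0; i < 4; i++)
{
    // Rotate each corner ray into world space and hand it to the shader
    _material.SetVector("_CornerRay" + i, cam.transform.TransformVector(corners[i]));
}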
I have a plane defined by a normal vector, and another normalised direction vector that is moving along that plane, both in 3D space.
I'm trying to figure out how to project that 3D direction vector onto the plane such that it ends up as a 2D vector with x/y coordinates.
It sounds like you need to find the angle between the direction vector and the plane. The size of the projection scales with the cosine of that angle. Since the plane's normal vector is perpendicular to the plane, I think you can instead use the sine of the angle between the normal vector and your direction vector.
The cosine of the angle between two vectors is given by the dot product of the vectors divided by the product of their magnitudes; that gives us our theta. Take the sine of theta, and we have the scaling factor (I'll call it s).
Next, you need to define unit-size vectors on the plane to project onto. It's probably easiest to set one of the unit vectors in the direction the projection moves forward...
If you set the unit vector in the direction of the projection, then you know the length of the projection in that unit space by taking the scaling factor s and multiplying it by the length of the vector.
After that, with the unit vector, multiply in the length and find your vector relative to your normally defined xyz axes.
I hope this helps.
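If it helps, here is a minimal Unity-flavoured sketch of those steps (ProjectToPlane2D and planeForward are illustrative names; planeForward is whatever reference direction on the plane you choose as your 2D x axis):

// Project 'direction' onto the plane with normal 'planeNormal' and return its
// 2D coordinates in an orthonormal basis built on that plane
static Vector2 ProjectToPlane2D(Vector3 direction, Vector3 planeNormal, Vector3 planeForward)
{
    Vector3 n = planeNormal.normalized;

    // Drop the component along the normal; what remains lies in the plane
    Vector3 onPlane = direction - Vector3.Dot(direction, n) * n;

    // Build the plane's 2D axes: x from the reference direction, y from the cross product
    Vector3 xAxis = (planeForward - Vector3.Dot(planeForward, n) * n).normalized;
    Vector3 yAxis = Vector3.Cross(n, xAxis);

    return new Vector2(Vector3.Dot(onPlane, xAxis), Vector3.Dot(onPlane, yAxis));
}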
Try something like this. I wrote a paper on this exact method a while ago and can provide you with a copy if you would like.
PointF Transform32(Point3 P)
{
    // V is the position vector of the point P (as in the alternative below)
    Vector3 V = new Vector3(P.X, P.Y, P.Z);
    float pX = (float)(((V.J * sxy) - V.I * cxy) * zoom);
    float pY = (float)(((V.K * cz) - (V.I * sxy * sz) - (V.J * sz * cxy)) * zoom);
    return new PointF(Origin.X + pX, Origin.Y - pY);
}
cxy is the cosine of the x-y camera angle, measured in radians from the positive x-axis on the xy plane.
sxy is the sine of the x-y camera angle.
cz is the cosine of the z camera angle, measured in radians from the x-y plane (so the angle is zero if the camera rests on that plane).
sz is the sine of the z camera angle.
Alternatively:
Vector3 V = new Vector3(P.X, P.Y, P.Z); // position vector of the point in question
Vector3 R = Operator.Project(V, View);  // projection of V onto the viewing vector
Vector3 Q = V - R;                      // component of V orthogonal to the viewing vector
Vector3 A = Operator.Cross(View, zA);   // artificial x-axis: viewing vector x (0, 0, 1)
Vector3 B = Operator.Cross(A, View);    // artificial y-axis: A x viewing vector
int pY = (int)(Operator.Dot(Q, B) / B.GetMagnitude());
int pX = (int)(Operator.Dot(Q, A) / A.GetMagnitude());
pY and pX should be your coordinates. Here, vector V is the position vector of the point in question, R is the projection of that vector onto the viewing vector, Q is the component of V orthogonal to the viewing vector, A is an artificial x-axis formed by the cross product of the viewing vector with zA = (0, 0, 1), and B is an artificial y-axis formed by the cross product of A with the viewing vector.
It sounds like what you're looking for is something like a simple rendering engine, similar to this, which used the above formulae.
Hope this helps.
The following function calculates the target vector of my FPS camera, to put into the OpenGL LookAt method. A camera orientation (in radians) of (0, 0, 0) means the camera is parallel to the z axis, looking in the negative direction, with the camera's right vector parallel to the x axis in the positive direction.
static Matrix4 GetViewMatrix()
{
    Vector3 cameraup = Vector3.Transform(Vector3.UnitY, Quaternion.FromAxisAngle(LineOfSightVector, Orientation.Z));

    LineOfSightVector.X = (float)(Math.Sin(Orientation.X) * Math.Cos(Orientation.Y));
    LineOfSightVector.Y = (float)Math.Sin(Orientation.Y);
    LineOfSightVector.Z = (float)(Math.Cos(Orientation.X) * Math.Cos(Orientation.Y));

    // View = CreatePerspectiveFieldOfView Matrix4
    return Matrix4.LookAt(Position, Position + LineOfSightVector, cameraup) * View;
}
It works fine when the camera's y axis is (0, 1, 0). However, I have introduced a Z value to my camera orientation (roll), which I use to get the "cameraup" vector. I now need to adjust the 3 trig equations for the LineOfSightVector to take the change in the "up" vector into account, so the camera controls move in the right directions. Can someone please advise me on this?
Thanks
Having
lineOfSight = vec3(sin(phi)*cos(ksi), sin(ksi), cos(phi)*cos(ksi));
you could compute the right and up directions as follows:
right = vec3(cos(phi)*cos(ksi), 0, -sin(phi)*cos(ksi));
up = cross(lineOfSight, right);
up = normalize(up);
Notice that in this model, the cases where cos(ksi) == 0 have to be handled separately.
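A small C# sketch of those formulas, with the roll from the question applied afterwards (phi, ksi, and roll stand in for Orientation.X, Orientation.Y, and Orientation.Z; OpenTK-style Vector3 and Quaternion assumed):

// Line of sight, right, and up from the two angles
Vector3 lineOfSight = new Vector3(
    (float)(Math.Sin(phi) * Math.Cos(ksi)),
    (float)Math.Sin(ksi),
    (float)(Math.Cos(phi) * Math.Cos(ksi)));

Vector3 right = new Vector3(
    (float)(Math.Cos(phi) * Math.Cos(ksi)),
    0f,
    (float)(-Math.Sin(phi) * Math.Cos(ksi)));

Vector3 up = Vector3.Normalize(Vector3.Cross(lineOfSight, right));

// Apply the roll by rotating 'up' about the line of sight
// (remember to handle cos(ksi) == 0 separately)
up = Vector3.Transform(up, Quaternion.FromAxisAngle(lineOfSight, roll));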