I am trying to get the x and y coordinates inside a transformed sprite. I have a simple 200x200 sprite which rotates in the middle of the screen - with an origin of (0,0) to keep things simple.
I have written a piece of code that can transform the mouse coordinates but only with a specified x OR y value.
int ox = (int)(MousePos.X - Position.X);
int oy = (int)(MousePos.Y - Position.Y);
Relative.X = (float)((ox - (Math.Sin(Rotation) * Y /* problem here */)) / Math.Cos(Rotation));
Relative.Y = (float)((oy + (Math.Sin(Rotation) * X /* problem here */)) / Math.Cos(Rotation));
How can I achieve this? Or how can I fix my equation?
The most general way is to express the transformation as a matrix. This way, you can add any other transformation later, if you find you need it.
For the given transformation, the matrix is:
var mat = Matrix.CreateRotationZ(Rotation) * Matrix.CreateTranslation(new Vector3(Position, 0f));
This matrix can be interpreted as the system transformation from sprite space to world space. You want the inverse transformation - the system transformation from world space to sprite space.
var inv = Matrix.Invert(mat);
You can transform the mouse coordinates with this matrix:
var mouseInSpriteSpace = Vector2.Transform(MousePos, inv);
And you get the mouse position in the sprite's local system.
You can check whether you have the correct matrix mat by using the overload of SpriteBatch.Begin() that takes a matrix. If you pass the matrix, drawing the sprite at (0, 0) with no rotation should produce the same result as your normal rotated draw.
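For that verification step, a minimal sketch (spriteBatch and spriteTexture are assumed names, not from the original post):
spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, mat);
// With mat applied by SpriteBatch, drawing at (0, 0) with no rotation
// should place the sprite exactly where your normal rotated draw puts it.
spriteBatch.Draw(spriteTexture, Vector2.Zero, Color.White);
spriteBatch.End();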
I need a little help with the maths for drawing lines between 2 points on a sphere. I have a 3D globe and some markers on it. I need to draw a curved line from point 1 to point 2. I managed to draw lines from point to point with LineRenderer, but they are drawn at the wrong angle and I can't figure out how to implement lines that go at the right angle. The code so far:
public static void DrawLine(Transform From, Transform To){
    float count = 12f;

    LineRenderer linerenderer;
    GameObject line = new GameObject("Line");
    linerenderer = line.AddComponent<LineRenderer>();

    var points = new List<Vector3>();

    Vector3 center = new Vector3(
        (From.transform.position.x + To.transform.position.x) / 2f,
        (From.transform.position.y + To.transform.position.y),
        (From.transform.position.z + To.transform.position.z) / 2f
    );

    for (float ratio = 0; ratio <= 1; ratio += 1 / count)
    {
        var tangent1 = Vector3.Lerp(From.position, center, ratio);
        var tangent2 = Vector3.Lerp(center, To.position, ratio);
        var curve = Vector3.Lerp(tangent1, tangent2, ratio);
        points.Add(curve);
    }

    linerenderer.positionCount = points.Count;
    linerenderer.SetPositions(points.ToArray());
}
So what I have now is creepy lines rising up along the y axis:
What should I take into account to let lines go along the sphere?
I suggest you find the normal vector of your two points with a cross product (if your sphere is centered at the origin) and then normalize it to use it as a rotation axis for a rotation using quaternions. To make the interpolations, you can simply rotate the first point around this vector by an angle of k * a, where k is a parameter from 0 to 1 and a is the angle between your first two vectors, which you can find with the acos() of the dot product of your two normalized points.
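A minimal sketch of this quaternion approach in Unity, reusing count and points from the question's code (and assuming the sphere is centered at the origin):
// Rotation axis: normal of the two position vectors
Vector3 axis = Vector3.Cross(From.position, To.position).normalized;
// Angle between the two points, from the acos of the dot product of the normalized vectors
float angle = Mathf.Acos(Vector3.Dot(From.position.normalized, To.position.normalized));
for (float k = 0; k <= 1f; k += 1f / count)
{
    // Rotate the first point around the axis by a fraction of the full angle;
    // the result stays on the sphere surface
    points.Add(Quaternion.AngleAxis(k * angle * Mathf.Rad2Deg, axis) * From.position);
}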
EDIT: I thought about a much easier solution (again, if the sphere is centered): you can do a lerp between your two vectors, then normalize the result and multiply it by the radius of the sphere. However, the spacing between the resulting points won't be constant, especially if they are far from each other.
EDIT 2: you can fix the problem of the second solution by using a function instead of a linear parameter for the lerp: f(t) = sin(t*a) / sin((PI + a*(1-2*t)) / 2) / dist(point1, point2), where a is the angle between the two points.
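For reference, a sketch of the lerp-and-normalize variant from the first edit (again assuming the sphere is centered at the origin; radius is an assumed variable for the sphere's radius, and the EDIT 2 reparameterization is not applied):
for (float ratio = 0; ratio <= 1f; ratio += 1f / count)
{
    // Lerp between the two points, then push the result back onto the sphere surface.
    // Note: the spacing between points will not be uniform (see EDIT 2 above).
    points.Add(Vector3.Lerp(From.position, To.position, ratio).normalized * radius);
}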
For a project in Unity3D I'm trying to transform all objects in the world by changing frames. What this means is that the origin of the new frame is rotated, translated, and scaled to match the origin of the old frame, then this operation is applied to all other objects (including the old origin).
For this, I need a generalized, 3-dimensional (thus 4x4) Transformation-Matrix.
I have looked at using Unity's built-in Matrix4x4.TRS()-method, but this seems useless, as it only applies the Translation, Rotation & Scale to a defined point.
What I'm looking for is a change of frames in which the new frame has a different origin, rotation, AND scale with regard to the original one.
To visualize the problem, I've made a small GIF (I currently have a working version in 3D, without using a Matrix, and without any rotation):
https://gyazo.com/8a7ab04dfef2c96f53015084eefbdb01
The values for each sphere:
Origin1 (Red Sphere)
Before:
Position (-10, 0, 0)
Rotation (0,0,0)
Scale (4,4,4)
After:
Position (10, 0, 0)
Rotation (0,0,0)
Scale (8,8,8)
-
Origin2 (Blue Sphere)
Before:
Position (-20, 0, 0)
Rotation (0,0,0)
Scale (2,2,2)
After:
Position (-10, 0, 0)
Rotation (0,0,0)
Scale (4,4,4)
-
World-Object (White Sphere)
Before:
Position (0, 0, 10)
Rotation (0,0,0)
Scale (2,2,2)
After:
Position (30, 0, 20)
Rotation (0,0,0)
Scale (4,4,4)
Currently I'm simply taking the vector between the 2 origins, scaling that by the scale difference between the two origins, then applying that on top of the new position of the original (first) origin.
This will of course not work when rotation is applied to any of the 2 origins.
// Position in original axes
Vector3 positionBefore = testPosition.TestPosition - origin.TestPosition;
// Position in new axes
Vector3 positionAfter = (positionBefore * scaleFactor) + origin.transform.position;
What I'm looking for is a Matrix that can do this (and include rotation, such that Origin2 is rotated to the rotation Origin1 was in before the transformation, and all other objects are moved to their correct positions).
Is there a way to do this without doing the full calculation on every Vector (i.e. transforming the positionbefore-Vector)? It needs to be applied to a (very) large number of objects every frame, thus it needs to be (fairly) optimized.
Edit: Scaling will ALWAYS be uniform.
There might be other solutions, but here is what I would do:
Wrap your objects into the following hierarchy:
WorldAnchorObject
|- Red Sphere
|- Blue Sphere
|- White Sphere
Make sure the WorldAnchorObject has
position: 0,0,0
rotation: 0,0,0
localScale: 1,1,1
position/rotate/scale the Spheres (this will now happen relative to WorldAnchorObject)
Now all that is left is to transform the WorldAnchorObject -> it will move, scale and rotate everything else and keep the relative transforms intact.
How exactly you move the world anchor is up to you. I guess you want to always center and normalize a certain child object. Maybe something like:
public void CenterOnObject(GameObject targetCenter)
{
    var targetTransform = targetCenter.transform;

    // The localPosition and localRotation are relative to the parent
    var targetPos = targetTransform.localPosition;
    var targetRot = targetTransform.localRotation;
    var targetScale = targetTransform.localScale;

    // First reset everything
    transform.position = Vector3.zero;
    transform.rotation = Quaternion.identity;
    transform.localScale = Vector3.one;

    // Set yourself to the inverted target position
    // After this action the target should be on 0,0,0
    transform.position = targetPos * -1;

    // Scale yourself relative to 0,0,0
    var newScale = Vector3.one * 1 / targetScale.x;
    ScaleAround(gameObject, Vector3.zero, newScale);

    // Now everything should be scaled so that the target
    // has scale 1,1,1 and still is on position 0,0,0

    // Now rotate around 0,0,0 so that the rotation of the target gets normalized
    transform.rotation = Quaternion.Inverse(targetRot);
}
// This scales an object around a certain pivot and
// takes care of the correct position translation as well
private void ScaleAround(GameObject target, Vector3 pivot, Vector3 newScale)
{
    Vector3 A = target.transform.localPosition;
    Vector3 B = pivot;
    Vector3 C = A - B; // diff from object pivot to desired pivot/origin

    float RS = newScale.x / target.transform.localScale.x; // relative scale factor

    // calc final position post-scale
    Vector3 FP = B + C * RS;

    // finally, actually perform the scale/translation
    target.transform.localScale = newScale;
    target.transform.localPosition = FP;
}
Now you call it, passing one of the children, e.g.
worldAnchorReference.CenterOnObject(RedSphere);
should result in what you wanted to achieve. (Hacking this in on my smartphone so no warranties, but if there is trouble I can check it as soon as I'm at a PC again. ;)
Nevermind..
Had to apply the rotation & scale to the Translation before creating the TRS
D'Oh
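The post doesn't show the final code, so here is a hedged sketch of one general way to build such a change-of-frame matrix with Unity's Matrix4x4.TRS; the names (fromPosition, toPosition, etc.) are illustrative and this is not necessarily the exact fix described above:
// Frame before the change and the frame it should end up matching (uniform scale, as stated in the edit)
Matrix4x4 fromFrame = Matrix4x4.TRS(fromPosition, fromRotation, fromScale);
Matrix4x4 toFrame = Matrix4x4.TRS(toPosition, toRotation, toScale);
// Maps any world-space point so that everything keeps its place relative to the moving frame
Matrix4x4 change = toFrame * fromFrame.inverse;
Vector3 newWorldPosition = change.MultiplyPoint3x4(oldWorldPosition);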
I want to get the position of the sprite while it's rotated.
Example
The bigger dot is the location I want to get (while rotated). The sprite is rotated, so it's quite hard to get that, and I'm clueless about how to get it. The origin is (0,0), for anyone asking. Is there any math that needs to be done to get this?
What you need to do is create a Vector2 from the origin to the point (before it's rotated) and then rotate that vector.
For example, let's say we have a sprite with the origin point in the center and we want to know where the front point will be when the sprite is rotated.
First, create a vector from the origin to the point you want rotated on the unrotated sprite. In this case it's probably going to be something like 20 pixels to the right of the center:
var vector = new Vector2(20, 0);
Now we can rotate our vector with this simple function (borrowed from MonoGame.Extended)
public static Vector2 Rotate(Vector2 value, float radians)
{
    var cos = (float) Math.Cos(radians);
    var sin = (float) Math.Sin(radians);
    return new Vector2(value.X * cos - value.Y * sin, value.X * sin + value.Y * cos);
}
Now we can get our rotated vector like so:
var radians = MathHelper.ToRadians(degrees: -33);
var rotatedVector = Rotate(vector, radians);
To get this vector back into "sprite" space we can add it back to our origin point:
var point = origin + rotatedVector;
Or alternatively, if you want it in "world" space, you can also add it to your sprite's position.
var worldPoint = position + origin + rotatedVector;
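Since the question states the origin is (0,0), the origin term drops out there, and an end-to-end sketch could look like this (the offset values and the position/rotation variables are illustrative):
var offset = new Vector2(180, 20);            // point to track, relative to the (0,0) origin, unrotated
var rotatedOffset = Rotate(offset, rotation); // uses the Rotate helper above
var worldPoint = position + rotatedOffset;    // origin is (0,0), so nothing else to add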
Happy coding!
I have a point in space represented by a 4x4 matrix. I'd like to get the screen coordinates for the point. Picking appears to be the exact opposite of what I need. I'm using the screen coordinates to determine where to draw text.
Currently the text I draw is floating in space far in front of the points. I've attached a screenshot of zoomed-in and zoomed-out views to better explain. As you can see in the screenshot, the distance between each point is the same when zoomed in, when it should be smaller.
Am I missing a transformation? World coordinates consider 0,0,0 to be the center of the grid. I'm using SlimDX.
var viewProj = mMainCamera.View * mMainCamera.Projection;
//Convert 4x4 matrix for point to Vector4
var originalXyz = Vector3.Transform(Vector3.Zero, matrix);
//Vector4 to Vector3
Vector3 worldSpaceCoordinates = new Vector3(originalXyz.X, originalXyz.Y, originalXyz.Z);
//Transform point by view projection matrix
var transformedCoords = Vector3.Transform(worldSpaceCoordinates, viewProj);
Vector3 clipSpaceCoordinates = new Vector3(transformedCoords.X, transformedCoords.Y, transformedCoords.Z);
Vector2 pixelPosition = new Vector2((float)(0.5 * (clipSpaceCoordinates.X + 1) * ActualWidth), (float)(0.5 * (clipSpaceCoordinates.Y + 1) * ActualHeight));
Turns out I was way overthinking this. Just project the point to the screen by passing your viewport information to Vector3.Project. It's a 3 line solution.
var viewProj = mMainCamera.View * mMainCamera.Projection;
var vp = mDevice.ImmediateContext.Rasterizer.GetViewports()[0];
var screenCoords = Vector3.Project(worldSpaceCoordinates, vp.X, vp.Y, vp.Width, vp.Height, vp.MinZ, vp.MaxZ, viewProj);
My XNA game uses Farseer Physics, which is a 2D physics engine with an optional renderer for physics engine data, to help you debug. Visual debug data is very useful, so I have it set up to be drawn according to my camera's state. This works perfectly, except for z axis rotation. See, I have a camera class that supports movement, zoom, and z axis rotation. My debug class uses Farseer's debug renderer to create matrices that make the debug data be drawn according to the camera, and it does it well, except for one thing: the z axis rotation uses the top-left corner of the screen as (0, 0), while my camera rotates using the center of the viewport as (0, 0). Does anyone have any tips for me? If I can make the debug drawer rotate from the center, it would work perfectly with my camera.
public void Draw(Camera2D camera, GraphicsDevice graphicsDevice)
{
    // Projection (location and zoom)
    float width = (1f / camera.Zoom) * ConvertUnits.ToSimUnits(graphicsDevice.Viewport.Width / 2);
    float height = (-1f / camera.Zoom) * ConvertUnits.ToSimUnits(graphicsDevice.Viewport.Height / 2);

    //projection = Matrix.CreateOrthographic(width, height, 1f, 1000000f);
    projection = Matrix.CreateOrthographicOffCenter(
        -width,
        width,
        -height,
        height,
        0f, 1000000f);

    // View (translation and rotation)
    float xTranslation = -1 * ConvertUnits.ToSimUnits(camera.Position.X);
    float yTranslation = -1 * ConvertUnits.ToSimUnits(camera.Position.Y);
    Vector3 translationVector = new Vector3(xTranslation, yTranslation, 0f);

    view = Matrix.CreateRotationZ(camera.Rotation) * Matrix.Identity;
    view.Translation = translationVector;

    DebugViewXNA.RenderDebugData(ref projection, ref view);
}
One common approach to solving this sort of issue is to move the object in question to the 'centre', rotate it, and then move it back.
So in this case, I'd suggest applying a transformation that moves the camera "up and across" by half the screen dimensions, applying the rotation, and then moving it back.
In general, in order to perform rotation around point (x, y, z), the operation needs to be broken down into 3 conceptual parts:
T is a translation matrix that translates by (-x, -y, -z)
R is a rotation matrix that rotates around the relevant axis.
T^-1 is the matrix that translates back to (x, y, z)
The matrix you're after is the result of the multiplication of these 3, in reverse order:
M = T^-1 * R * T
The x,y,z you should use are your camera's position.
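Applied to the Draw method above, that could look roughly like this (a sketch; note that XNA multiplies row vectors, so the leftmost matrix is applied first):
// Pivot = the camera position in simulation units (the point to rotate around)
Vector3 pivot = new Vector3(ConvertUnits.ToSimUnits(camera.Position.X),
                            ConvertUnits.ToSimUnits(camera.Position.Y), 0f);
view = Matrix.CreateTranslation(-pivot)             // T: move the pivot to the origin
     * Matrix.CreateRotationZ(camera.Rotation)      // R: rotate around it
     * Matrix.CreateTranslation(pivot)              // T^-1: move back
     * Matrix.CreateTranslation(translationVector); // then the original camera translation
Since translationVector is already -pivot here, the last two translations cancel, so this reduces to translating by -pivot first and then rotating.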