Projecting points from 3D to 2D using HoloLens 2 - C#

I used PhotoCapture to obtain an object of type PhotoCaptureFrame from which I was able to extract the extrinsic matrix. Everything works when I hold the Hololens completely still, but if I try to rotate or translate it I can't get good results. Has anyone ever projected points from 3D to 2D with Hololens 2?
I'm using Unity version 2022.1.17f1, and the code I used to get the extrinsic matrix and the intrinsic matrix is as follows:
void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame)
{
    if (result.success)
    {
        Debug.Log("Saved Photo to disk!");
        if (photoCaptureFrame.TryGetProjectionMatrix(out Matrix4x4 projectionMatrix))
        {
            StreamWriter sw = new StreamWriter(
                Application.persistentDataPath +
                string.Format("/ProjectionMatrix{0}.txt", count));
            sw.WriteLine(projectionMatrix.ToString());
            sw.Close();
        }
        else
        {
            Debug.Log("Failed to save camera matrix");
        }
        if (photoCaptureFrame.TryGetCameraToWorldMatrix(out Matrix4x4 cameraMatrix))
        {
            StreamWriter sw = new StreamWriter(
                Application.persistentDataPath +
                string.Format("/WorldMatrix{0}.txt", count));
            sw.WriteLine(cameraMatrix.inverse.ToString());
            sw.Close();
        }
        else
        {
            Debug.Log("Failed to save world matrix");
        }
        StreamWriter s = new StreamWriter(
            Application.persistentDataPath +
            string.Format("/worldToCameraMatrix{0}.txt", count++));
        s.WriteLine(cam.worldToCameraMatrix.inverse.ToString());
        s.Close();
        photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
    }
    else
    {
        Debug.Log("Failed to save Photo to disk");
    }
}
An example of a captured image is the following:
Example 1
The red dots were created with Unity and the world coordinates of these dots were saved in appropriate CSV files.
The main goal is to use the intrinsic matrix and the extrinsic matrix to project the points from 3D to 2D. To do this, I used the following Python code:
# readMatrix() is a function used to read the Matrix4x4 obtained from PhotoCaptureFrame in Unity
extrinsic_matrix = readMatrix(f"{WorldMatrices[image_index]}")
# Extract the 3x3 rotation matrix
rotation_matrix = np.array([row[0:-1] for row in extrinsic_matrix[0:-1]]).copy()
rotation_matrix_ = rotation_matrix.copy()
###########################################################
print("Unity Matrix4x4: ")
print(rotation_matrix_)
# Change the coordinate-system axes from OpenGL to OpenCV
# by negating the entire second and third rows
rotation_matrix_[1][0] *= -1
rotation_matrix_[1][1] *= -1
rotation_matrix_[1][2] *= -1
rotation_matrix_[2][0] *= -1
rotation_matrix_[2][1] *= -1
rotation_matrix_[2][2] *= -1
###########################################################
print("Rotation Matrix: ")
print(rotation_matrix_)
# Extract the translation vector
translation_vector = np.array([row[-1] for row in extrinsic_matrix[0:-1]]).copy()
translation_vector[1] *= -1
translation_vector[2] *= -1
###########################################################
print("Translation Vector:")
print(translation_vector)
# Read the 3D coordinates of the red points and project them with cv2.projectPoints
for key in vertices.keys():
    points, _ = cv2.projectPoints(
        np.float32(vertices[key]),
        rotation_matrix_,
        translation_vector,
        camera_matrix_,
        None)
    for point in points:
        x, y = (point[0][0], point[0][1])
        x = int(x * width / 2 + width / 2)
        y = int(y * height / 2 + height / 2)
        cv2.circle(image, (x, y), radius=20, color=(255, 0, 0), thickness=-1)
###########################################################
plt.imshow(image[..., ::-1])
plt.show()
The extrinsic-matrix elements and translation-vector components multiplied by -1 are needed to go from the coordinate system used by OpenGL to the one used by OpenCV.
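For what it's worth, the intended conversion (negating the second and third rows of the rotation, and the y/z components of the translation) is equivalent to left-multiplying by diag(1, -1, -1). A minimal sketch of that idea, assuming the extrinsic is a 4x4 NumPy array (the function name is mine):

```python
import numpy as np

def opengl_to_opencv(extrinsic_4x4):
    """Convert an OpenGL-style extrinsic (4x4) to the OpenCV convention
    by flipping the y and z camera axes."""
    flip = np.diag([1.0, -1.0, -1.0])
    rotation = flip @ extrinsic_4x4[:3, :3]    # negate rows 1 and 2
    translation = flip @ extrinsic_4x4[:3, 3]  # negate t_y and t_z
    return rotation, translation
```

With the identity extrinsic this yields diag(1, -1, -1) and a zero translation, matching the element-wise flips.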
An example of the output is the following:
Example 2
I expected the blue dots to coincide with the red dots, but that doesn't happen. This is what I meant by "can't get good results".
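One detail worth double-checking is how `camera_matrix_` is built, since its derivation isn't shown above. Because the projected x/y are later rescaled from NDC to pixels with width/2 and height/2, the intrinsics presumably come straight from the saved Unity projection matrix. A hedged sketch of that assumption (function name mine):

```python
import numpy as np

def intrinsics_from_unity_projection(proj):
    """Build an OpenCV-style camera matrix from a Unity projection matrix.

    The entries are kept in NDC scale, so cv2.projectPoints outputs
    normalized device coordinates; that matches the later rescaling
    x * width/2 + width/2 and y * height/2 + height/2.
    """
    fx, fy = proj[0, 0], proj[1, 1]
    cx, cy = proj[0, 2], proj[1, 2]
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])
```

If `camera_matrix_` instead holds pixel-unit intrinsics, the later NDC-to-pixel rescaling would be applied twice, which would shift the reprojected points.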

Related

Moving in a grid with certain number of steps - Unity GameDev

I built a 3D Chess game which works flawlessly. But I would like to make some changes to the movement.
The piece is supposed to walk a number of tiles. For example with a range of 3 it can either move 3 to one direction (left for example), or 2 left 1 up/down, or 1 left 2 up/down.
Which minor change do I have to implement in my code for it to work?
private Vector2Int[] directions = new Vector2Int[]
{
Vector2Int.left,
Vector2Int.up,
Vector2Int.right,
Vector2Int.down,
new Vector2Int(1, 1),
new Vector2Int(1, -1),
new Vector2Int(-1, 1),
new Vector2Int(-1,- 1),
};
public override List<Vector2Int> SelectAvaliableSquares()
{
avaliableMoves.Clear();
float range = Board.BOARD_SIZE;
foreach (var direction in directions)
{
for (int i = 1; i <= range; i++)
{
Vector2Int nextCoords = occupiedSquare + direction * i;
Piece piece = board.GetPieceOnSquare(nextCoords);
if (!board.CheckIfCoordinatesAreOnBoard(nextCoords))
break;
if (piece == null)
TryToAddMove(nextCoords);
else if (!piece.IsFromSameTeam(this))
{
TryToAddMove(nextCoords);
break;
}
else if (piece.IsFromSameTeam(this))
break;
}
}
return avaliableMoves;
}
The given code is an example of the movement of an ordinary queen.
Added a picture to demonstrate the movement.
Find the center of each chess tile and hold them all in a 2D array. Calculate which tiles the unit must traverse according to its type and location. Then, when it moves, have it move first along the X axis and then the Y axis, or vice versa. If it moves diagonally, just move along both axes at the same time.
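Another way to read the question is as a reachability problem: with a range of 3, the piece may reach any square whose path length over orthogonal steps is within 3. A small Python sketch of that idea (the names and board representation are mine, not from the original code), using breadth-first search so blocked tiles can be respected:

```python
from collections import deque

def reachable_squares(start, max_steps, board_size, blocked=frozenset()):
    """Map every square reachable in at most max_steps orthogonal moves
    to its distance from start, staying on the board and off blocked tiles."""
    directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    seen = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if seen[(x, y)] == max_steps:
            continue  # range exhausted along this path
        for dx, dy in directions:
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < board_size and 0 <= nxt[1] < board_size
                    and nxt not in seen and nxt not in blocked):
                seen[nxt] = seen[(x, y)] + 1
                queue.append(nxt)
    del seen[start]
    return seen
```

This returns squares reachable in at most max_steps; if the piece must spend its full range each turn, filter the result by the stored distance instead.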

How to detect when a 3D object is contained within a 2D boundary defined by 4 co-ordinates?

When a 3D object lies within the bounds of a 2D polygon (or a 3D polygon that extends infinitely on the Y-coordinate), a message should be triggered.
There are a number of 2D polygons that form a loop, with 2 co-ordinates of each polygon being shared with the previous polygon in the sequence.
I have tried using Rect(), but I am having trouble defining each rectangle with the 4 co-ordinates available rather than a width and height:
3D object to be detected = this
Each 2D rectangle defined as (A = previousA, B = previousB, C = currentA, D = currentB)
void Update()
{
    Vector3 previousA = new Vector3(0, 0, 0);
    Vector3 previousB = new Vector3(0, 0, 0);
    Vector3 currentA;
    Vector3 currentB;
    Rect rect1;
    foreach (GameObject item in CheckpointManager.GetComponent<Checkpoints>().checklines)
    {
        currentA = item.transform.GetChild(0).transform.position;
        currentB = item.transform.GetChild(1).transform.position;
        if (previousA == new Vector3(0, 0, 0) || previousB == new Vector3(0, 0, 0))
        {
        }
        else
        {
            rect1 = new Rect(
                previousA.x,
                previousA.y,
                Mathf.Sqrt((previousA.x - currentB.x) * (previousA.x - currentB.x)),
                Mathf.Sqrt((previousA.y - previousB.y) * (previousA.y - previousB.y)));
            if (rect1.Contains(this.transform.position))
            {
                Debug.Log("intersection confirmed in polygon: " + rect1);
            }
        }
        previousA = currentA;
        previousB = currentB;
    }
}
Edit 1:
The sample of code above is attached to the 3D white cube object.
Here is a picture better demonstrating what I mean:
Here the camera is high on the Y-axis, with the Z-axis going up and down (north/south), and the X-axis is going left and right (east/west).
Each of the red and green icons mark a single point. Two adjacent red icons and two adjacent green icons mark the four points of a 2D polygon (connected by Unity gizmo lines). The final pair of points created will be connected to the first pair of points that were created. I have created a tool in the editor that can create pairs of points, so that there will always be a loop of pairs of points, and therefore always a loop of 2D polygons with four points.
The 3D White Cube needs to be aware of when it is currently contained within any of the 2D polygons, no matter how many there are on the screen, and no matter their rotation. It also needs to be aware of which polygon it is currently residing in, if any.
However, the 3D cube can be anywhere along the Y-axis (close to the camera or potentially infinitely far away).
If you define your polygons with a specific winding rule and your quads are always convex, then you can do this:
Ignore the y value of your 3D mesh and use y = 0 for all points of your object. If a point is inside, then the direction to all 2D polygon vertices matches the winding rule of your 2D polygon. If at least one does not match, then your point is outside. So check all points of your object; if all are in, then the object is fully contained.
How do you detect the winding for a point P?
In your case, simply by the cross product of P(i+1) - P(i) and P - P(i). That will give you a normal directed either up or down (along the y axis), so just check whether the y coordinate of each result is positive or negative.
N0 = cross( P1-P0, P-P0 )
N1 = cross( P2-P1, P-P1 )
N2 = cross( P3-P2, P-P2 )
N3 = cross( P0-P3, P-P3 )
if ((sign(N0.y)==sign(N1.y))
&&(sign(N2.y)==sign(N3.y))
&&(sign(N0.y)==sign(N2.y))) point_is_inside;
else point_is_outside;
So check all points of your object: if all lie inside, the object is fully inside; if only some do, the object is intersecting the boundary. The sign itself depends on the winding and the properties of the coordinate system.
Here (at the bottom) is how to compute the 3D cross product:
Understanding 4x4 homogenous transform matrices
This approach works for any number of vertices per 2D polygon; it just must be convex with a specific winding rule. If your 2D polygon is not convex, then you need to divide it into convex triangles or use a hit test instead.
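The same-sign winding test above can be sketched in Python, working in the XZ plane since y is ignored (names are mine):

```python
def point_in_convex_quad_xz(p, poly):
    """Winding test for a convex polygon in the XZ plane (y is ignored).

    p is an (x, z) pair; poly is a list of (x, z) vertices in a consistent
    winding order. The point is inside if the cross products of every edge
    with the edge-to-point vector all share the same sign.
    """
    signs = []
    n = len(poly)
    for i in range(n):
        ax, az = poly[i]
        bx, bz = poly[(i + 1) % n]
        # y component of cross(P(i+1) - P(i), P - P(i)), collapsed to 2D
        cross_y = (bx - ax) * (p[1] - az) - (bz - az) * (p[0] - ax)
        signs.append(cross_y >= 0)
    return all(signs) or not any(signs)
```

Despite the name it works for any convex polygon, with either winding direction, since it only checks that all signs agree.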

AABB vs Circle collision in custom physics engine

I have followed this tutorial: https://gamedevelopment.tutsplus.com/tutorials/how-to-create-a-custom-2d-physics-engine-the-basics-and-impulse-resolution--gamedev-6331 to create a 2D physics engine in C# (the author works almost all the time in inconsistent pseudo-C++). I've got Circle vs Circle collision and AABB vs AABB collision working fine, but when trying the AABB vs Circle collision (below), the two rigidbodies just stick together and slowly move, glitching, in one direction.
I would be super thankful if someone could help me with this as I have spent days and still don't know what's causing the error.
If someone needs more information from my code, I'd be happy to provide it.
public static bool AABBvsCircle(ref Collision result) {
    RigidBody AABB = result.a.Shape is AABB ? result.a : result.b;
    RigidBody CIRCLE = result.b.Shape is Circle ? result.b : result.a;
    Vector2 n = CIRCLE.Position - AABB.Position;
    Vector2 closest = n;
    float x_extent = ((AABB)AABB.Shape).HalfWidth;
    float y_extent = ((AABB)AABB.Shape).HalfHeight;
    closest.X = Clamp(-x_extent, x_extent, closest.X);
    closest.Y = Clamp(-y_extent, y_extent, closest.Y);
    bool inside = false;
    if (n == closest) {
        inside = true;
        if (Abs(n.X) > Abs(n.Y)) {
            // Clamp to closest extent
            if (closest.X > 0)
                closest.X = x_extent;
            else
                closest.X = -x_extent;
        }
        // y axis is shorter
        else {
            // Clamp to closest extent
            if (closest.Y > 0)
                closest.Y = y_extent;
            else
                closest.Y = -y_extent;
        }
    }
    Vector2 normal = n - closest;
    float d = normal.LengthSquared();
    float r = ((Circle)CIRCLE.Shape).Radius;
    // Early out if the radius is shorter than the distance to the closest
    // point and the circle is not inside the AABB
    if (d > (r * r) && !inside)
        return false;
    // Avoid sqrt until needed
    d = (float)Sqrt(d);
    if (inside) {
        result.normal = -normal / d;
        result.penetration = r - d;
    }
    else {
        result.normal = normal / d;
        result.penetration = r - d;
    }
    return true;
}
Edit 1: the collision resolution method in the "Collision" struct
public void Resolve() {
    Vector2 rv = b.Velocity - a.Velocity;
    float velAlongNormal = Vector2.Dot(rv, normal);
    if (velAlongNormal > 0)
        return;
    float e = Min(a.Restitution, b.Restitution);
    float j = -(1 + e) * velAlongNormal;
    j /= a.InvertedMass + b.InvertedMass;
    Vector2 impulse = j * normal;
    a.Velocity -= a.InvertedMass * impulse;
    b.Velocity += b.InvertedMass * impulse;
    const float percent = 0.2f; // usually 20% to 80%
    const float slop = 0.01f; // usually 0.01 to 0.1
    Vector2 correction = Max(penetration - slop, 0.0f) / (a.InvertedMass + b.InvertedMass) * percent * normal;
    if (float.IsNaN(correction.X) || float.IsNaN(correction.Y))
        correction = Vector2.Zero;
    a.Position -= a.InvertedMass * correction;
    b.Position += b.InvertedMass * correction;
}
Before doing any detailed examining of the code logic, I spotted this potential mistake:
result.normal = -normal / d;
Since d was set to normal.LengthSquared and not normal.Length as it should be, the applied position correction could either be (much) smaller or (much) bigger than intended. Given that your objects are "sticking together", it is likely to be the former, i.e. d > 1.
(The fix is of course simply result.normal = -normal / Math.Sqrt(d);)
Note that the above may not be the only source of error; let me know if there is still undesirable behavior.
Although your tag specifies C#, here are basic AABB-to-AABB and AABB-to-Circle collisions done in C++, taken from LearnOpenGL: In Practice: 2D Game: Collision Detection.
AABB - AABB Collsion
// AABB to AABB Collision
GLboolean CheckCollision(GameObject &one, GameObject &two) {
    // Collision x-axis?
    bool collisionX = one.Position.x + one.Size.x >= two.Position.x &&
        two.Position.x + two.Size.x >= one.Position.x;
    // Collision y-axis?
    bool collisionY = one.Position.y + one.Size.y >= two.Position.y &&
        two.Position.y + two.Size.y >= one.Position.y;
    // Collision only if on both axes
    return collisionX && collisionY;
}
AABB To Circle Collision Without Resolution
// AABB to Circle Collision without Resolution
GLboolean CheckCollision(BallObject &one, GameObject &two) {
    // Get center point circle first
    glm::vec2 center(one.Position + one.Radius);
    // Calculate AABB info (center, half-extents)
    glm::vec2 aabb_half_extents(two.Size.x / 2, two.Size.y / 2);
    glm::vec2 aabb_center(
        two.Position.x + aabb_half_extents.x,
        two.Position.y + aabb_half_extents.y
    );
    // Get difference vector between both centers
    glm::vec2 difference = center - aabb_center;
    glm::vec2 clamped = glm::clamp(difference, -aabb_half_extents, aabb_half_extents);
    // Add clamped value to AABB_center and we get the value of box closest to circle
    glm::vec2 closest = aabb_center + clamped;
    // Retrieve vector between center circle and closest point AABB and check if length <= radius
    difference = closest - center;
    return glm::length(difference) < one.Radius;
}
Then, in the next section of his online tutorial, he shows how to do collision resolution using the above method: LearnOpenGL: Collision Resolution.
In this section he adds an enumeration, another function, and an std::tuple<> to refine the above detection system while keeping the code easier and cleaner to manage and read.
enum Direction {
    UP,
    RIGHT,
    DOWN,
    LEFT
};

Direction VectorDirection(glm::vec2 target)
{
    glm::vec2 compass[] = {
        glm::vec2(0.0f, 1.0f),  // up
        glm::vec2(1.0f, 0.0f),  // right
        glm::vec2(0.0f, -1.0f), // down
        glm::vec2(-1.0f, 0.0f)  // left
    };
    GLfloat max = 0.0f;
    GLuint best_match = -1;
    for (GLuint i = 0; i < 4; i++)
    {
        GLfloat dot_product = glm::dot(glm::normalize(target), compass[i]);
        if (dot_product > max)
        {
            max = dot_product;
            best_match = i;
        }
    }
    return (Direction)best_match;
}
typedef std::tuple<GLboolean, Direction, glm::vec2> Collision;
However, there is a slight change to the original CheckCollision() function for AABB to Circle: its declaration/definition now returns a Collision instead of a GLboolean.
AABB - Circle Collision With Collision Resolution
// AABB - Circle Collision with Collision Resolution
Collision CheckCollision(BallObject &one, GameObject &two) {
    // Get center point circle first
    glm::vec2 center(one.Position + one.Radius);
    // Calculate AABB info (center, half-extents)
    glm::vec2 aabb_half_extents(two.Size.x / 2, two.Size.y / 2);
    glm::vec2 aabb_center(two.Position.x + aabb_half_extents.x, two.Position.y + aabb_half_extents.y);
    // Get difference vector between both centers
    glm::vec2 difference = center - aabb_center;
    glm::vec2 clamped = glm::clamp(difference, -aabb_half_extents, aabb_half_extents);
    // Now that we know the clamped values, add this to AABB_center and we get the value of box closest to circle
    glm::vec2 closest = aabb_center + clamped;
    // Now retrieve vector between center circle and closest point AABB and check if length < radius
    difference = closest - center;
    // Not <= since in that case a collision also occurs when object one exactly
    // touches object two, which is the state they are in at the end of each
    // collision resolution stage.
    if (glm::length(difference) < one.Radius)
        return std::make_tuple(GL_TRUE, VectorDirection(difference), difference);
    else
        return std::make_tuple(GL_FALSE, UP, glm::vec2(0, 0));
}
The above functions are called within this function, which does the actual logic when a collision is detected:
void Game::DoCollisions()
{
    for (GameObject &box : this->Levels[this->Level].Bricks)
    {
        if (!box.Destroyed)
        {
            Collision collision = CheckCollision(*Ball, box);
            if (std::get<0>(collision)) // If collision is true
            {
                // Destroy block if not solid
                if (!box.IsSolid)
                    box.Destroyed = GL_TRUE;
                // Collision resolution
                Direction dir = std::get<1>(collision);
                glm::vec2 diff_vector = std::get<2>(collision);
                if (dir == LEFT || dir == RIGHT) // Horizontal collision
                {
                    Ball->Velocity.x = -Ball->Velocity.x; // Reverse horizontal velocity
                    // Relocate
                    GLfloat penetration = Ball->Radius - std::abs(diff_vector.x);
                    if (dir == LEFT)
                        Ball->Position.x += penetration; // Move ball to right
                    else
                        Ball->Position.x -= penetration; // Move ball to left
                }
                else // Vertical collision
                {
                    Ball->Velocity.y = -Ball->Velocity.y; // Reverse vertical velocity
                    // Relocate
                    GLfloat penetration = Ball->Radius - std::abs(diff_vector.y);
                    if (dir == UP)
                        Ball->Position.y -= penetration; // Move ball back up
                    else
                        Ball->Position.y += penetration; // Move ball back down
                }
            }
        }
    }
    // Also check collisions for player pad (unless stuck)
    Collision result = CheckCollision(*Ball, *Player);
    if (!Ball->Stuck && std::get<0>(result))
    {
        // Check where it hit the board, and change velocity based on where it hit the board
        GLfloat centerBoard = Player->Position.x + Player->Size.x / 2;
        GLfloat distance = (Ball->Position.x + Ball->Radius) - centerBoard;
        GLfloat percentage = distance / (Player->Size.x / 2);
        // Then move accordingly
        GLfloat strength = 2.0f;
        glm::vec2 oldVelocity = Ball->Velocity;
        Ball->Velocity.x = INITIAL_BALL_VELOCITY.x * percentage * strength;
        //Ball->Velocity.y = -Ball->Velocity.y;
        // Keep speed consistent over both axes (multiply by length of old
        // velocity, so total strength is not changed)
        Ball->Velocity = glm::normalize(Ball->Velocity) * glm::length(oldVelocity);
        // Fix sticky paddle
        Ball->Velocity.y = -1 * abs(Ball->Velocity.y);
    }
}
Now, some of the code above is game-specific (the Game class, Ball class, Player, etc., which inherit from a GameObject), but the algorithm itself should prove useful, as it is exactly what you are looking for, just in a different language. As for your actual problem, it appears you are using more than basic motion: some form of kinetics, as can be seen from your Resolve() method.
The overall Pseudo Algorithm for doing AABB to Circle Collision with Resolution would be as follows:
Do Collisions:
    Check For Collision: Ball With Box
        Get Center Point Of Circle First
        Calculate AABB Info (Center & Half-Extents)
        Get Difference Vector Between Both Centers
        Clamp That Difference Between [-Half-Extents, Half-Extents]
        Add The Clamped Value To The AABB-Center To Give The Point Of Box Closest To The Circle
        Retrieve The Vector Between Center Circle & Closest Point AABB & Check If Length Is < Radius (In This Case A Collision)
            If True Return tuple(GL_TRUE, VectorDirection(difference), difference)
                See Function Above For VectorDirection Implementation
            Else Return tuple(GL_FALSE, UP, glm::vec2(0,0))
    Perform Collision Resolution (Test If Collision Is True)
        Extract Direction & Difference Vector
        Test Direction For Horizontal Collision
            If True Reverse Horizontal Velocity
            Get Penetration Amount (Ball Radius - abs(diff_vector.x))
            Test If Direction Is Left Or Right (W,E)
                If Left - Move Ball To Right (ball.position.x += penetration)
                Else Right - Move Ball To Left (ball.position.x -= penetration)
        Else Test Direction For Vertical Collision
            If True Reverse Vertical Velocity
            Get Penetration Amount (Ball Radius - abs(diff_vector.y))
            Test If Direction Is Up Or Down (N,S)
                If Up - Move Ball Up (ball.position.y -= penetration)
                Else Down - Move Ball Down (ball.position.y += penetration)
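The detection step of that pseudo-algorithm (clamp the center difference to the half-extents, then compare the distance to the closest point against the radius) can be sketched in a minimal 2D Python version, with my own function names:

```python
import math

def clamp(v, lo, hi):
    """Clamp v to the closed interval [lo, hi]."""
    return max(lo, min(hi, v))

def circle_vs_aabb(circle_center, radius, box_center, half_extents):
    """Closest-point test between a circle and an axis-aligned box.

    Returns (collided, penetration, normal); the normal points from the
    box toward the circle, or is None when the center is inside the box.
    """
    dx = circle_center[0] - box_center[0]
    dy = circle_center[1] - box_center[1]
    # Clamp the center difference to the half-extents: this gives the
    # point of the box closest to the circle, in box-local coordinates.
    cx = clamp(dx, -half_extents[0], half_extents[0])
    cy = clamp(dy, -half_extents[1], half_extents[1])
    nx, ny = dx - cx, dy - cy
    dist = math.hypot(nx, ny)
    if dist > radius:
        return False, 0.0, None
    if dist == 0.0:
        # Circle center is inside the box; the caller should pick the
        # nearest face to push out along, as the C# code above does.
        return True, radius, None
    return True, radius - dist, (nx / dist, ny / dist)
```

Note that the normal here is divided by the true distance, not the squared distance, which is exactly the pitfall flagged in the earlier answer.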
Now, the above algorithm assumes that the boxes are not rotated: their top and bottom edges are parallel to the horizontal, and their sides are parallel to the left and right edges of the window's screen coordinates. The section with the vertical displacement also assumes that the top-left corner of the screen, the first pixel, is (0,0), hence the opposite operation for vertical displacement. It further assumes 2D collisions, not 3D rigid-body or ragdoll-type collisions. You can use this to compare against your own implementation, but just looking at your code without running it through a debugger, it is extremely hard for me to see what is actually causing your bug. I hope this provides the help that you need.
The above code from the mentioned OpenGL tutorial website does work, as I have tested it myself. This is one of the simplest collision-detection algorithms; it is by no means a comprehensive system, and it still has caveats and pitfalls not mentioned here, but it sufficed for the application it was used in. If you need more information about collision detection, there are a few chapters worth reading in Ian Millington's book Game Physics Engine Development, although that book is about a generalized 3D physics engine and only briefly discusses collision detection, since there are entire books dedicated to such complex beasts.

Unity Shaderlab- transform from screen space to world space

I'm making a post-processing shader (in unity) that requires world-space coordinates. I have access to the depth information of a certain pixel, as well as the onscreen location of that pixel. How can I find the world position that that pixel corresponds to, much like the function ViewportToWorldPos()?
It's been three years! I was working on this recently, and an older engineer helped me solve the problem. Here is the code.
First, we need to pass the camera transform matrix to the shader in a script:
void OnRenderImage(RenderTexture src, RenderTexture dst)
{
    Camera currentCamera = Camera.main;
    Matrix4x4 matrixCameraToWorld = currentCamera.cameraToWorldMatrix;
    Matrix4x4 matrixProjectionInverse = GL.GetGPUProjectionMatrix(currentCamera.projectionMatrix, false).inverse;
    Matrix4x4 matrixHClipToWorld = matrixCameraToWorld * matrixProjectionInverse;
    Shader.SetGlobalMatrix("_MatrixHClipToWorld", matrixHClipToWorld);
    Graphics.Blit(src, dst, _material);
}
Then we need the depth information to build the clip-space position, like this:
inline half3 TransformUVToWorldPos(half2 uv)
{
    half depth = tex2D(_CameraDepthTexture, uv).r;
    #ifndef SHADER_API_GLCORE
        half4 positionCS = half4(uv * 2 - 1, depth, 1) * LinearEyeDepth(depth);
    #else
        half4 positionCS = half4(uv * 2 - 1, depth * 2 - 1, 1) * LinearEyeDepth(depth);
    #endif
    return mul(_MatrixHClipToWorld, positionCS).xyz;
}
That's all.
Have a look at this tutorial: http://flafla2.github.io/2016/10/01/raymarching.html
Essentially:
store one vector per corner of the screen, passed as constants, that goes from the camera position to that corner;
interpolate the vectors based on the screen-space position, or the UVs of your screen-space quad;
compute the final position as cameraPosition + interpolatedVector * depth.
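On the CPU, the interpolate-and-march idea looks roughly like this (a sketch under my own naming assumptions; in the shader, the interpolation across the quad is done for free by the rasterizer):

```python
import numpy as np

def world_from_depth(cam_pos, corner_rays, uv, depth):
    """Reconstruct a world position from screen UV and depth.

    corner_rays maps "bl"/"br"/"tl"/"tr" to camera-to-corner vectors
    (the per-corner constants from the tutorial); uv is in [0, 1]^2.
    """
    u, v = uv
    # Bilinear interpolation of the four corner rays
    bottom = corner_rays["bl"] * (1 - u) + corner_rays["br"] * u
    top = corner_rays["tl"] * (1 - u) + corner_rays["tr"] * u
    ray = bottom * (1 - v) + top * v
    # March along the interpolated ray by the sampled depth
    return np.asarray(cam_pos) + ray * depth
```

The depth here must be on the same scale the corner rays were built for (e.g. eye-space depth with rays normalized to unit forward distance).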

How to get coords inside a transformed sprite?

I am trying to get the x and y coordinates inside a transformed sprite. I have a simple 200x200 sprite which rotates in the middle of the screen, with an origin of (0,0) to keep things simple.
I have written a piece of code that can transform the mouse coordinates, but only with a specified x OR y value:
int ox = (int)(MousePos.X - Position.X);
int oy = (int)(MousePos.Y - Position.Y);
Relative.X = (float)((ox - (Math.Sin(Rotation) * Y /* problem here */)) / Math.Cos(Rotation));
Relative.Y = (float)((oy + (Math.Sin(Rotation) * X /* problem here */)) / Math.Cos(Rotation));
How can I achieve this? Or how can I fix my equation?
The most general way is to express the transformation as a matrix. This way, you can add any other transformation later, if you find you need it.
For the given transformation, the matrix is:
var mat = Matrix.CreateRotationZ(Rotation) * Matrix.CreateTranslation(Position);
This matrix can be interpreted as the system transformation from sprite space to world space. You want the inverse transformation - the system transformation from world space to sprite space.
var inv = Matrix.Invert(mat);
You can transform the mouse coordinates with this matrix:
var mouseInSpriteSpace = Vector2.Transform(MousePos, inv);
And you get the mouse position in the sprite's local system.
You can check that you have the correct matrix mat by using the overload of SpriteBatch.Begin() that takes a matrix: if you pass mat and draw the sprite at (0, 0) with no rotation, it should appear exactly as before.
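The same inverse mapping can be written without the XNA matrix helpers: since world = R(rotation) · local + position, the inverse is local = Rᵀ · (world - position). A small NumPy sketch of that relationship (function name mine), which also answers the original question's "x OR y" problem by handling both components at once:

```python
import numpy as np

def mouse_to_sprite_space(mouse, position, rotation):
    """Map world/mouse coordinates into a rotated sprite's local space.

    The sprite transform is world = R(rotation) @ local + position,
    so the inverse is local = R.T @ (world - position).
    """
    c, s = np.cos(rotation), np.sin(rotation)
    # Transpose of the counter-clockwise rotation matrix [[c, -s], [s, c]]
    r_transposed = np.array([[c, s],
                             [-s, c]])
    diff = np.asarray(mouse, dtype=float) - np.asarray(position, dtype=float)
    return r_transposed @ diff
```

Note that screen coordinates in XNA/MonoGame have y pointing down, so the effective rotation direction may be mirrored relative to this math-convention sketch.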
