Matrix transformations to recreate camera "Look At" functionality - C#

Summary:
I'm given a series of points in 3D space, and I want to analyze them from any viewing angle. I'm trying to figure out how to reproduce the "Look At" functionality of OpenGL in WPF. I want the mouse-move X and Y to drive the Phi and Theta spherical coordinates (respectively) of the camera, so that as I move the mouse the camera appears to orbit around the center of mass (generally the origin) of the point cloud, which represents the target of the Look At.
What I've done:
I have made the following code, but so far it isn't doing what I want:
internal static Matrix3D CalculateLookAt(Vector3D eye, Vector3D at = new Vector3D(), Vector3D up = new Vector3D())
{
    if (Math.Abs(up.Length - 0.0) < double.Epsilon) up = new Vector3D(0, 1, 0);

    var zaxis = (at - eye);
    zaxis.Normalize();
    var xaxis = Vector3D.CrossProduct(up, zaxis);
    xaxis.Normalize();
    var yaxis = Vector3D.CrossProduct(zaxis, xaxis);

    return new Matrix3D(
        xaxis.X, yaxis.X, zaxis.X, 0,
        xaxis.Y, yaxis.Y, zaxis.Y, 0,
        xaxis.Z, yaxis.Z, zaxis.Z, 0,
        Vector3D.DotProduct(xaxis, -eye), Vector3D.DotProduct(yaxis, -eye), Vector3D.DotProduct(zaxis, -eye), 1
    );
}
I got the algorithm from this link: http://msdn.microsoft.com/en-us/library/bb205342(VS.85).aspx
I then apply the returned matrix to all of the points using this:
var vector = new Vector3D(p.X, p.Y, p.Z);
var projection = Vector3D.Multiply(vector, _camera); // _camera is the LookAt Matrix
if (double.IsNaN(projection.X)) projection.X = 0;
if (double.IsNaN(projection.Y)) projection.Y = 0;
if (double.IsNaN(projection.Z)) projection.Z = 0;
return new Point(
(dispCanvas.ActualWidth * projection.X / 320),
(dispCanvas.ActualHeight * projection.Y / 240)
);
I am calculating the center of all the points and using it as the at vector, and I've been setting my initial eye vector to (center.X, center.Y, center.Z + 100), which is plenty far away from all the points.
I then take the mouse move and apply the following code to get the Spherical Coordinates and put that into the CalculateLookAt function:
var center = GetCenter(_points);
var pos = e.GetPosition(Canvas4); //e is of type MouseButtonEventArgs
var delta = _previousPoint - pos;
double r = 100;
double theta = delta.Y * Math.PI / 180;
double phi = delta.X * Math.PI / 180;
var x = r * Math.Sin(theta) * Math.Cos(phi);
var y = r * Math.Cos(theta);
var z = -r * Math.Sin(theta) * Math.Sin(phi);
_camera = MathHelper.CalculateLookAt(new Vector3D(center.X * x, center.Y * y, center.Z * z), new Vector3D(center.X, center.Y, center.Z));
UpdateCanvas(); // Redraws the points on the canvas using the new _camera values
Conclusion:
This does not make the camera orbit around the points. So either my understanding of how to use the Look At function is off, or my math is incorrect.
Any help would be very much appreciated.

Vector3D won't transform in affine space. A Vector3D never picks up a translation because it is a vector: vectors exist only in vector space, not in affine space (i.e. 3D vector space extended with a translation component), so the translation part of the matrix is ignored. For positions you need a Point3D:
var m = new Matrix3D(
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
10, 10, 10, 1);
var v = new Point3D(1, 1, 1);
var r = Point3D.Multiply(v, m); // 11,11,11
Note your presumed answer is also incorrect, as it should be 10 + 1 for each component, since your vector is [1,1,1].
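Applied to the projection step in the question, a minimal sketch of the Point3D-based version would be (variable names follow the question; Matrix3D.Transform applies the translation row when given a Point3D):
var point = new Point3D(p.X, p.Y, p.Z);
var projection = _camera.Transform(point); // _camera is the LookAt Matrix3D; the offset row is now applied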

Well, it turns out that the Matrix3D library has some interesting issues.
I noticed that Vector3D.Multiply(vector, matrix) would not translate the vector.
For example:
var matrixTest = new Matrix3D(
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
10, 10, 10, 1
);
var vectorTest = new Vector3D(1, 1, 1);
var result = Vector3D.Multiply(vectorTest, matrixTest);
// result = {1,1,1}, should be {11,11,11}
I ended up having to rewrite some of the basic matrix math functions in order for the code to work.
My logic was fine; it was the basic math (as handled by the Matrix3D library) that was the problem.
Here is the fix. Replace all Vector3D.Multiply method calls with this:
public static Vector3D Vector3DMultiply(Vector3D vector, Matrix3D matrix)
{
    // Multiply the vector by the matrix and include the OffsetX/Y/Z translation components
    return new Vector3D(
        vector.X * matrix.M11 + vector.Y * matrix.M12 + vector.Z * matrix.M13 + matrix.OffsetX,
        vector.X * matrix.M21 + vector.Y * matrix.M22 + vector.Z * matrix.M23 + matrix.OffsetY,
        vector.X * matrix.M31 + vector.Y * matrix.M32 + vector.Z * matrix.M33 + matrix.OffsetZ
    );
}
And everything works!
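With that helper in place, the projection snippet from the question becomes (a sketch, assuming the helper lives alongside CalculateLookAt in the MathHelper class):
var vector = new Vector3D(p.X, p.Y, p.Z);
var projection = MathHelper.Vector3DMultiply(vector, _camera); // replaces the Vector3D.Multiply call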

Related

Meshes created from code don't have a position - Unity

So, I tried to create a grid so that I can instantiate objects on it. I check for the position of said hit object (one of the squares I created) and then set the instantiated object to that position. Problem is, the squares I created with code don't have a position and are all set to 0, 0, 0.
{
    GameObject tileObject = new GameObject(string.Format("{0}, {1}", x, y));
    tileObject.transform.parent = transform;

    Mesh mesh = new Mesh();
    tileObject.AddComponent<MeshFilter>().mesh = mesh;
    tileObject.AddComponent<MeshRenderer>().material = tileMaterial;

    Vector3[] vertices = new Vector3[4];
    vertices[0] = new Vector3(x * tileSize, 0, y * tileSize);
    vertices[1] = new Vector3(x * tileSize, 0, (y + 1) * tileSize);
    vertices[2] = new Vector3((x + 1) * tileSize, 0, y * tileSize);
    vertices[3] = new Vector3((x + 1) * tileSize, 0, (y + 1) * tileSize);

    int[] tris = new int[] { 0, 1, 2, 1, 3, 2 };

    mesh.vertices = vertices;
    mesh.triangles = tris;
    mesh.RecalculateNormals();

    tileObject.layer = LayerMask.NameToLayer("Tile");
    tileObject.AddComponent<BoxCollider>();

    //var xPos = Mathf.Round(x);
    //var yPos = Mathf.Round(y);
    //tileObject.gameObject.transform.position = new Vector3(xPos, 0f, yPos);

    return tileObject;
}
As said, your issue is that you leave all tiles at position (0, 0, 0) and only set their vertices to the desired world-space positions.
You would rather want to keep your vertices local, e.g.:
// I would use an offset of -0.5f so the mesh is centered at the transform pivot
// Also no need to recreate the arrays every time; you can simply reference the same ones
private readonly Vector3[] vertices = new Vector3[4]
{
    new Vector3(-0.5f, 0, -0.5f),
    new Vector3(-0.5f, 0, 0.5f),
    new Vector3(0.5f, 0, -0.5f),
    new Vector3(0.5f, 0, 0.5f)
};
private readonly int[] tris = new int[] { 0, 1, 2, 1, 3, 2 };
and then in your method do
GameObject tileObject = new GameObject($"{x},{y}");
tileObject.transform.parent = transform;
tileObject.transform.localScale = new Vector3(tileSize, 1, tileSize);
tileObject.transform.localPosition = new Vector3(x * tileSize, 0, y * tileSize);
The latter depends, of course, on your needs. Actually, I would prefer to have the tiles also centered around the grid object, so something like:
// The "-0.5f" is for centering the tile itself correctly
// The "-gridWith/2f" makes the entire grid centered around the parent
tileObject.localPosition = new Vector3((x - 0.5f - gridWidth/2f) * tileSize, 0, (y - 0.5f - gridHeight/2f) * tileSize);
In order to later find out which tile you are standing on (e.g. via raycasts, collisions, etc.), I would rather use a dedicated component and simply tell it its coordinates, e.g.:
// Note that Tile is a built-in type so you would want to avoid confusion
public class MyTile : MonoBehaviour
{
    public Vector2Int GridPosition;
}
and then while generating your grid you would simply add
var tile = tileObject.AddComponent<MyTile>();
tile.GridPosition = new Vector2Int(x,y);
while you can still access its transform.position to get the actual world-space center of the tile.
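For example, a rough sketch of reading the tile under the mouse with a raycast (hypothetical usage; it assumes the tiles keep the BoxCollider and "Tile" layer from the question and that a perspective camera is tagged MainCamera):
var ray = Camera.main.ScreenPointToRay(Input.mousePosition);
if (Physics.Raycast(ray, out var hit, 100f, LayerMask.GetMask("Tile")))
{
    // The collider sits on the tile object itself, so MyTile is on the same GameObject
    var tile = hit.collider.GetComponent<MyTile>();
    if (tile != null)
    {
        Debug.Log($"Pointing at tile {tile.GridPosition} centered at {tile.transform.position}");
    }
}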

SkiaSharp Calc new point coordinates after applying 3d rotation

I am using a matrix to translate and then rotate in 3D (x, y, z), using the xRotate, yRotate, zRotate, and depth (== 300) variables.
using (var bmp = new SKBitmap(800, 600))
using (var canvas = new SKCanvas(bmp))
using (var paint = new SKPaint())
{
    canvas.Clear(SKColors.White);
    paint.IsAntialias = true;

    // Find the center of the canvas
    var info = bmp.Info;
    float xCenter = info.Width / 2;
    float yCenter = info.Height / 2;

    // Translate center to origin
    SKMatrix matrix = SKMatrix.MakeTranslation(-xCenter, -yCenter);

    // Use a 3D matrix for 3D rotations and perspective
    SKMatrix44 matrix44 = SKMatrix44.CreateIdentity();
    matrix44.PostConcat(SKMatrix44.CreateRotationDegrees(1, 0, 0, xRotate));
    matrix44.PostConcat(SKMatrix44.CreateRotationDegrees(0, 1, 0, yRotate));
    matrix44.PostConcat(SKMatrix44.CreateRotationDegrees(0, 0, 1, zRotate));

    SKMatrix44 perspectiveMatrix = SKMatrix44.CreateIdentity();
    perspectiveMatrix[3, 2] = -1 / depth;
    matrix44.PostConcat(perspectiveMatrix);

    // Concatenate with the 2D matrix
    SKMatrix.PostConcat(ref matrix, matrix44.Matrix);

    // Translate back to center
    SKMatrix.PostConcat(ref matrix, SKMatrix.MakeTranslation(xCenter, yCenter));

    // Set the matrix and display the bitmap
    canvas.SetMatrix(matrix);
    canvas.DrawBitmap(currentImage, 50, 25, paint);

    pictureBox1.Image = bmp.ToBitmap();
}
If I have some Point in the original currentImage, I want to calculate its new location after drawing the transformed image. How can I do that? Would I reuse the matrix to calculate it?
Found the answer. Let the point be (1, 2) in the currentImage. Then simply:
var newPoint = matrix.MapPoint(1, 2);
newPoint = new SKPoint(50 + newPoint.X, 25 + newPoint.Y); // + offsets of DrawBitmap
Or, to draw on a canvas that has already been mapped using canvas.SetMatrix:
var newPoint = new SKPoint(1, 2);
canvas.DrawCircle(newPoint.X + 50, newPoint.Y + 25, 7, paint); // + offsets of DrawBitmap

The angle between two 3D vectors with a result range 0 - 360

I'm looking for a way to calculate the angle between three points considered as two vectors (see below):
using System.Windows.Media.Media3D;
public static float AngleBetweenThreePoints(Point3D[] points)
{
    var v1 = points[1] - points[0];
    var v2 = points[2] - points[1];

    var cross = Vector3D.CrossProduct(v1, v2);
    var dot = Vector3D.DotProduct(v1, v2);

    var angle = Math.PI - Math.Atan2(cross.Length, dot);
    return (float)angle;
}
If you give this the following points:
var points = new[]
{
new Point3D(90, 100, 300),
new Point3D(100, 200, 300),
new Point3D(100, 300, 300)
};
or the following:
var points = new[]
{
new Point3D(110, 100, 300),
new Point3D(100, 200, 300),
new Point3D(100, 300, 300)
};
You get the same result. I can see that the cross product in the function returns (0, 0, 10000) in the first case and (0, 0, -10000) in the second, but this information gets lost with cross.Length, which can never return a negative result.
What I'm looking for is a result range 0 - 360 not limited to 0 - 180. How do I achieve that?
The answer is to provide a reference UP vector:
public static float AngleBetweenThreePoints(Point3D[] points, Vector3D up)
{
    var v1 = points[1] - points[0];
    var v2 = points[2] - points[1];

    var cross = Vector3D.CrossProduct(v1, v2);
    var dot = Vector3D.DotProduct(v1, v2);

    var angle = Math.Atan2(cross.Length, dot);

    var test = Vector3D.DotProduct(up, cross);
    if (test < 0.0) angle = -angle;
    return (float)angle;
}
This came from here: https://stackoverflow.com/a/5190354/181622
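For example, a small usage sketch (assuming the same points arrays as above and an up reference of +Z); the signed angle comes back in radians in the range -π to π, so it can be wrapped into 0-360 degrees like this:
var up = new Vector3D(0, 0, 1);
var signedRadians = AngleBetweenThreePoints(points, up);
var degrees = signedRadians * 180 / Math.PI;
if (degrees < 0) degrees += 360;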
Are you looking for this?
θ_radian = arccos((P ⋅ Q) / (|P| |Q|)) with vectors P and Q
θ_radian = θ_degree * π / 180
EDIT: for a 0-360 range:
angle = angle * 360 / (2*Math.PI);
if (angle < 0) angle = angle + 360;
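In C#, that formula would look something like the following sketch (hypothetical Vector3D variables P and Q; note that Math.Acos on its own only yields 0-180°, so a sign test against a reference vector, as in the previous answer, is still needed to reach the full 0-360° range):
var dot = Vector3D.DotProduct(P, Q);
var thetaRadians = Math.Acos(dot / (P.Length * Q.Length));
var thetaDegrees = thetaRadians * 180 / Math.PI;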

Image rotation translation offset

I built a system that rotates and scales objects using the left-most, vertically centered point of the object as the origin. After transforming, you can send the object's HTML/CSS3 information to the server, and C# will attempt to redraw the scene that you created. C# rotates the objects/images by the same number of degrees, but rotates them around the vertical and horizontal center points, which causes the objects/images to change dimensions. I have already calculated those changes; however, there is an offset occurring in the x, y coordinates of the top left of the objects/images.
This is the method I've been attempting to work out to deal with the offsets:
int[] rotResult = new int[2];
int[] tLCoord = new int[2];
int[] tRCoord = new int[2];
int[] bLCoord = new int[2];
int[] bRCoord = new int[2];
int[] tLCoordTmp = new int[2];
int[] tRCoordTmp = new int[2];
int[] bLCoordTmp = new int[2];
int[] bRCoordTmp = new int[2];
float sin = (float)Math.Sin(angle * Math.PI / 180.0);
float cos = (float)Math.Cos(angle * Math.PI / 180.0);
tLCoord[0] = (originalX - Math.Abs(xMin));
tLCoord[1] = (originalY - Math.Abs(yMin));
tRCoord[0] = origwidth;
tRCoord[1] = 0;
bLCoord[0] = 0;
bLCoord[1] = (origheight * -1);
bRCoord[0] = origwidth;
bRCoord[1] = (origheight * -1);
tLCoordTmp[0] = Convert.ToInt32((tLCoord[0] * cos) - (tLCoord[1] * sin));
tLCoordTmp[1] = Convert.ToInt32((tLCoord[1] * cos) + (tLCoord[0] * sin));
tRCoordTmp[0] = Convert.ToInt32(((tLCoordTmp[0] + tRCoord[0]) * cos) - (tRCoord[1] * sin));
tRCoordTmp[1] = Convert.ToInt32(((tLCoordTmp[1] + tRCoord[1]) * cos) + (tRCoord[0] * sin));
bLCoordTmp[0] = Convert.ToInt32(((tLCoordTmp[0] + bLCoord[0]) * cos) - (bLCoord[1] * sin));
bLCoordTmp[1] = Convert.ToInt32(((tLCoordTmp[1] + bLCoord[1]) * cos) + (bLCoord[0] * sin));
bRCoordTmp[0] = Convert.ToInt32(((tLCoordTmp[0] + bRCoord[0]) * cos) - (bRCoord[1] * sin));
bRCoordTmp[1] = Convert.ToInt32(((tLCoordTmp[1] + bRCoord[1]) * cos) + (bRCoord[0] * sin));
if (angle >= 270)
{
    rotResult[0] = tLCoordTmp[0];
    rotResult[1] = tRCoordTmp[1];
}
else if (angle <= 90)
{
    rotResult[0] = bLCoordTmp[0];
    rotResult[1] = tLCoordTmp[1];
}
else if (angle > 90 && angle <= 180)
{
    rotResult[0] = bRCoordTmp[0];
    rotResult[1] = bLCoordTmp[1];
}
else if (angle > 180 && angle < 270)
{
    rotResult[0] = tRCoordTmp[0];
    rotResult[1] = bRCoordTmp[1];
}

return rotResult;
Immediately, I know there are a few issues with the way this formula works out in regard to coordinate planes and the way that C# and HTML/CSS each render visual elements. I've been running small experiments to offset against those, and nothing seems to be getting any closer to a solution. Any ideas?
The answer was to first calculate the image's X, Y offsets from the top-left corner of the object while rotating the image in C# via TranslateTransform and RotateTransform to prevent clipping, then independently calculate the proper X, Y coordinates for the object using the left-most center point of the object. After that, it was only a matter of a few conditional statements to deal with the quadrant-based calculation differences.
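For reference, a minimal sketch of the rotate-without-clipping step described above, assuming System.Drawing (GDI+); the offset and quadrant logic from the original code is application-specific and not reproduced here:
using System.Drawing;
using System.Drawing.Drawing2D;

static Bitmap RotateWithoutClipping(Image source, float angleDegrees)
{
    // Size the target bitmap to the rotated bounding box so nothing gets clipped
    double rad = angleDegrees * Math.PI / 180.0;
    double cos = Math.Abs(Math.Cos(rad));
    double sin = Math.Abs(Math.Sin(rad));
    int newWidth = (int)Math.Ceiling(source.Width * cos + source.Height * sin);
    int newHeight = (int)Math.Ceiling(source.Width * sin + source.Height * cos);

    var result = new Bitmap(newWidth, newHeight);
    using (var g = Graphics.FromImage(result))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        // Rotate about the center of the enlarged canvas, then draw the source centered
        g.TranslateTransform(newWidth / 2f, newHeight / 2f);
        g.RotateTransform(angleDegrees);
        g.TranslateTransform(-source.Width / 2f, -source.Height / 2f);
        g.DrawImage(source, Point.Empty);
    }
    return result;
}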

How do I properly setup a texture position using XNA/Monogame VertexPositionTexture on a circle

I am using the following to create a circle using VertexPositionTexture:
public static ObjectData Circle(Vector2 origin, float radius, int slices)
{
/// See below
}
The texture that is applied to it doesn't look right; it spirals out from the center. I have tried some other things, but nothing does it how I want. I would like it to just fan around the circle, or start in the top-left and finish in the bottom-right. Basically, I want it to be easier to create textures for it.
I know there are MUCH easier ways to do this without using meshes, but that is not what I am trying to accomplish right now.
This is the code that ended up working thanks to Pinckerman:
public static ObjectData Circle(Vector2 origin, float radius, int slices)
{
    VertexPositionTexture[] vertices = new VertexPositionTexture[slices + 2];
    int[] indices = new int[slices * 3];

    float x = origin.X;
    float y = origin.Y;

    float deltaRad = MathHelper.ToRadians(360) / slices;
    float delta = 0;
    float thetaInc = (((float)Math.PI * 2) / vertices.Length);

    vertices[0] = new VertexPositionTexture(new Vector3(x, y, 0), new Vector2(.5f, .5f));

    float sliceSize = 1f / slices;
    for (int i = 1; i < slices + 2; i++)
    {
        float newX = (float)Math.Cos(delta) * radius + x;
        float newY = (float)Math.Sin(delta) * radius + y;
        float textX = 0.5f + ((radius * (float)Math.Cos(delta)) / (radius * 2));
        float textY = 0.5f + ((radius * (float)Math.Sin(delta)) / (radius * 2));

        vertices[i] = new VertexPositionTexture(new Vector3(newX, newY, 0), new Vector2(textX, textY));
        delta += deltaRad;
    }

    indices[0] = 0;
    indices[1] = 1;
    for (int i = 0; i < slices; i++)
    {
        indices[3 * i] = 0;
        indices[(3 * i) + 1] = i + 1;
        indices[(3 * i) + 2] = i + 2;
    }

    ObjectData thisData = new ObjectData()
    {
        Vertices = vertices,
        Indices = indices
    };
    return thisData;
}

public static ObjectData Ellipse()
{
    ObjectData thisData = new ObjectData()
    {
    };
    return thisData;
}
ObjectData is just a structure that contains an array of vertices & an array of indices.
Hope this helps others that may be trying to accomplish something similar.
It looks like a spiral because you've set the upper-left point of the texture, Vector2(0, 0), at the center of your "circle", and that's wrong. You need to set it on the top-left vertex of the top-left slice of your circle, because (0, 0) of your UV map is the upper-left corner of your texture.
I think you need to set (0.5, 0) for the upper vertex, (1, 0.5) for the right, (0.5, 1) for the lower and (0, 0.5) for the left, or something like this, and for the others use some trigonometry.
The center of your circle has to be Vector2(0.5, 0.5).
Regarding the trigonometry, I think you should do something like this.
The center of your circle has UV value of Vector2(0.5, 0.5), and for the others (supposing the second point of the sequence is just right to the center, having UV value of Vector2(1, 0.5)) try something like this:
vertices[i] = new VertexPositionTexture(new Vector3(newX, newY, 0), new Vector2(0.5f + radius * (float)Math.Cos(delta), 0.5f - radius * (float)Math.Sin(delta)));
I've just edited your third line in the for-loop. This should give you the UV coordinates you need for each point. I hope so.
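One note on that line: the trig term needs to be normalized so the UV values stay inside [0, 1], which is effectively what the / (radius * 2) terms in the accepted code above achieve. Dividing out the radius, the same line would read (using + or - for the sine term simply flips the texture vertically):
vertices[i] = new VertexPositionTexture(new Vector3(newX, newY, 0), new Vector2(0.5f + 0.5f * (float)Math.Cos(delta), 0.5f - 0.5f * (float)Math.Sin(delta)));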
